00:00:00.002 Started by upstream project "autotest-per-patch" build number 132399 00:00:00.002 originally caused by: 00:00:00.003 Started by user sys_sgci 00:00:00.080 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.081 The recommended git tool is: git 00:00:00.081 using credential 00000000-0000-0000-0000-000000000002 00:00:00.083 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.116 Fetching changes from the remote Git repository 00:00:00.117 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.166 Using shallow fetch with depth 1 00:00:00.166 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.166 > git --version # timeout=10 00:00:00.215 > git --version # 'git version 2.39.2' 00:00:00.215 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.251 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.251 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.296 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.309 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.323 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.323 > git config core.sparsecheckout # timeout=10 00:00:06.335 > git read-tree -mu HEAD # timeout=10 00:00:06.353 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.373 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.373 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.463 [Pipeline] Start of Pipeline 00:00:06.474 [Pipeline] library 00:00:06.475 Loading library shm_lib@master 00:00:06.475 Library shm_lib@master is cached. Copying from home. 00:00:06.487 [Pipeline] node 00:00:06.496 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.497 [Pipeline] { 00:00:06.503 [Pipeline] catchError 00:00:06.504 [Pipeline] { 00:00:06.515 [Pipeline] wrap 00:00:06.522 [Pipeline] { 00:00:06.528 [Pipeline] stage 00:00:06.530 [Pipeline] { (Prologue) 00:00:06.707 [Pipeline] sh 00:00:06.989 + logger -p user.info -t JENKINS-CI 00:00:07.009 [Pipeline] echo 00:00:07.011 Node: WFP8 00:00:07.019 [Pipeline] sh 00:00:07.317 [Pipeline] setCustomBuildProperty 00:00:07.329 [Pipeline] echo 00:00:07.330 Cleanup processes 00:00:07.334 [Pipeline] sh 00:00:07.615 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.615 1885482 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.626 [Pipeline] sh 00:00:07.910 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.910 ++ grep -v 'sudo pgrep' 00:00:07.910 ++ awk '{print $1}' 00:00:07.910 + sudo kill -9 00:00:07.910 + true 00:00:07.925 [Pipeline] cleanWs 00:00:07.933 [WS-CLEANUP] Deleting project workspace... 00:00:07.933 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.939 [WS-CLEANUP] done 00:00:07.942 [Pipeline] setCustomBuildProperty 00:00:07.953 [Pipeline] sh 00:00:08.231 + sudo git config --global --replace-all safe.directory '*' 00:00:08.317 [Pipeline] httpRequest 00:00:08.700 [Pipeline] echo 00:00:08.701 Sorcerer 10.211.164.20 is alive 00:00:08.711 [Pipeline] retry 00:00:08.713 [Pipeline] { 00:00:08.726 [Pipeline] httpRequest 00:00:08.730 HttpMethod: GET 00:00:08.730 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.731 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.750 Response Code: HTTP/1.1 200 OK 00:00:08.751 Success: Status code 200 is in the accepted range: 200,404 00:00:08.751 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:34.963 [Pipeline] } 00:00:34.984 [Pipeline] // retry 00:00:34.993 [Pipeline] sh 00:00:35.281 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:35.297 [Pipeline] httpRequest 00:00:35.685 [Pipeline] echo 00:00:35.687 Sorcerer 10.211.164.20 is alive 00:00:35.696 [Pipeline] retry 00:00:35.698 [Pipeline] { 00:00:35.712 [Pipeline] httpRequest 00:00:35.717 HttpMethod: GET 00:00:35.717 URL: http://10.211.164.20/packages/spdk_ede20dc4e93c688eb6e71dded535a45c7193fb9c.tar.gz 00:00:35.718 Sending request to url: http://10.211.164.20/packages/spdk_ede20dc4e93c688eb6e71dded535a45c7193fb9c.tar.gz 00:00:35.721 Response Code: HTTP/1.1 200 OK 00:00:35.721 Success: Status code 200 is in the accepted range: 200,404 00:00:35.722 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_ede20dc4e93c688eb6e71dded535a45c7193fb9c.tar.gz 00:01:07.005 [Pipeline] } 00:01:07.024 [Pipeline] // retry 00:01:07.034 [Pipeline] sh 00:01:07.318 + tar --no-same-owner -xf spdk_ede20dc4e93c688eb6e71dded535a45c7193fb9c.tar.gz 00:01:09.863 [Pipeline] sh 00:01:10.151 + git -C spdk log 
--oneline -n5 00:01:10.152 ede20dc4e lib/nvmf: Fix double free of connect request 00:01:10.152 bc5264bd5 nvme: Fix discovery loop when target has no entry 00:01:10.152 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc 00:01:10.152 c0b2ac5c9 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit() 00:01:10.152 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size 00:01:10.163 [Pipeline] } 00:01:10.181 [Pipeline] // stage 00:01:10.189 [Pipeline] stage 00:01:10.191 [Pipeline] { (Prepare) 00:01:10.211 [Pipeline] writeFile 00:01:10.231 [Pipeline] sh 00:01:10.516 + logger -p user.info -t JENKINS-CI 00:01:10.529 [Pipeline] sh 00:01:10.814 + logger -p user.info -t JENKINS-CI 00:01:10.827 [Pipeline] sh 00:01:11.115 + cat autorun-spdk.conf 00:01:11.115 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.115 SPDK_TEST_NVMF=1 00:01:11.115 SPDK_TEST_NVME_CLI=1 00:01:11.115 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:11.115 SPDK_TEST_NVMF_NICS=e810 00:01:11.115 SPDK_TEST_VFIOUSER=1 00:01:11.115 SPDK_RUN_UBSAN=1 00:01:11.115 NET_TYPE=phy 00:01:11.180 RUN_NIGHTLY=0 00:01:11.186 [Pipeline] readFile 00:01:11.219 [Pipeline] withEnv 00:01:11.222 [Pipeline] { 00:01:11.238 [Pipeline] sh 00:01:11.528 + set -ex 00:01:11.528 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:11.528 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:11.528 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.528 ++ SPDK_TEST_NVMF=1 00:01:11.528 ++ SPDK_TEST_NVME_CLI=1 00:01:11.528 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:11.528 ++ SPDK_TEST_NVMF_NICS=e810 00:01:11.528 ++ SPDK_TEST_VFIOUSER=1 00:01:11.528 ++ SPDK_RUN_UBSAN=1 00:01:11.528 ++ NET_TYPE=phy 00:01:11.528 ++ RUN_NIGHTLY=0 00:01:11.528 + case $SPDK_TEST_NVMF_NICS in 00:01:11.528 + DRIVERS=ice 00:01:11.528 + [[ tcp == \r\d\m\a ]] 00:01:11.528 + [[ -n ice ]] 00:01:11.528 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:11.528 rmmod: ERROR: Module mlx4_ib 
is not currently loaded 00:01:11.528 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:11.528 rmmod: ERROR: Module irdma is not currently loaded 00:01:11.528 rmmod: ERROR: Module i40iw is not currently loaded 00:01:11.528 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:11.528 + true 00:01:11.528 + for D in $DRIVERS 00:01:11.528 + sudo modprobe ice 00:01:11.528 + exit 0 00:01:11.537 [Pipeline] } 00:01:11.554 [Pipeline] // withEnv 00:01:11.560 [Pipeline] } 00:01:11.576 [Pipeline] // stage 00:01:11.587 [Pipeline] catchError 00:01:11.590 [Pipeline] { 00:01:11.607 [Pipeline] timeout 00:01:11.607 Timeout set to expire in 1 hr 0 min 00:01:11.609 [Pipeline] { 00:01:11.627 [Pipeline] stage 00:01:11.630 [Pipeline] { (Tests) 00:01:11.648 [Pipeline] sh 00:01:11.936 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:11.936 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:11.936 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:11.936 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:11.936 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:11.936 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:11.936 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:11.936 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:11.936 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:11.936 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:11.936 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:11.936 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:11.936 + source /etc/os-release 00:01:11.936 ++ NAME='Fedora Linux' 00:01:11.936 ++ VERSION='39 (Cloud Edition)' 00:01:11.936 ++ ID=fedora 00:01:11.936 ++ VERSION_ID=39 00:01:11.936 ++ VERSION_CODENAME= 00:01:11.936 ++ PLATFORM_ID=platform:f39 00:01:11.936 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:11.936 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:11.936 ++ LOGO=fedora-logo-icon 00:01:11.936 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:11.936 ++ HOME_URL=https://fedoraproject.org/ 00:01:11.936 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:11.936 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:11.936 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:11.936 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:11.936 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:11.936 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:11.936 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:11.936 ++ SUPPORT_END=2024-11-12 00:01:11.936 ++ VARIANT='Cloud Edition' 00:01:11.936 ++ VARIANT_ID=cloud 00:01:11.936 + uname -a 00:01:11.936 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:11.936 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:14.472 Hugepages 00:01:14.472 node hugesize free / total 00:01:14.472 node0 1048576kB 0 / 0 00:01:14.472 node0 2048kB 0 / 0 00:01:14.472 node1 1048576kB 0 / 0 00:01:14.472 node1 2048kB 0 / 0 00:01:14.472 00:01:14.472 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:14.472 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:14.472 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 
00:01:14.472 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:14.472 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:14.472 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:14.472 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:14.472 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:14.472 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:14.472 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:14.472 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:14.472 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:14.472 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:14.472 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:14.472 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:14.472 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:14.472 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:14.472 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:14.472 + rm -f /tmp/spdk-ld-path 00:01:14.472 + source autorun-spdk.conf 00:01:14.472 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.472 ++ SPDK_TEST_NVMF=1 00:01:14.472 ++ SPDK_TEST_NVME_CLI=1 00:01:14.472 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.472 ++ SPDK_TEST_NVMF_NICS=e810 00:01:14.472 ++ SPDK_TEST_VFIOUSER=1 00:01:14.472 ++ SPDK_RUN_UBSAN=1 00:01:14.472 ++ NET_TYPE=phy 00:01:14.472 ++ RUN_NIGHTLY=0 00:01:14.472 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:14.472 + [[ -n '' ]] 00:01:14.472 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:14.472 + for M in /var/spdk/build-*-manifest.txt 00:01:14.472 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:14.472 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:14.472 + for M in /var/spdk/build-*-manifest.txt 00:01:14.472 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:14.472 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:14.472 + for M in /var/spdk/build-*-manifest.txt 00:01:14.472 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:01:14.472 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:14.731 ++ uname 00:01:14.731 + [[ Linux == \L\i\n\u\x ]] 00:01:14.731 + sudo dmesg -T 00:01:14.731 + sudo dmesg --clear 00:01:14.731 + dmesg_pid=1886405 00:01:14.731 + [[ Fedora Linux == FreeBSD ]] 00:01:14.731 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:14.731 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:14.731 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:14.731 + [[ -x /usr/src/fio-static/fio ]] 00:01:14.731 + export FIO_BIN=/usr/src/fio-static/fio 00:01:14.731 + FIO_BIN=/usr/src/fio-static/fio 00:01:14.731 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:14.731 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:14.731 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:14.731 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:14.731 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:14.731 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:14.731 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:14.731 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:14.731 + sudo dmesg -Tw 00:01:14.731 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:14.731 15:10:18 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:14.731 15:10:18 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:14.731 15:10:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.731 15:10:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:14.731 15:10:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:14.731 15:10:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:01:14.731 15:10:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:14.731 15:10:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:14.731 15:10:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:14.731 15:10:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:14.731 15:10:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:14.731 15:10:18 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:14.731 15:10:18 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:14.731 15:10:18 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:14.731 15:10:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:14.731 15:10:18 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:14.731 15:10:18 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:14.731 15:10:18 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:14.731 15:10:18 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:14.731 15:10:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.731 15:10:18 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.731 15:10:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.731 15:10:18 -- paths/export.sh@5 -- $ export PATH 00:01:14.731 15:10:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.731 15:10:18 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:14.731 15:10:18 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:14.732 15:10:18 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732111818.XXXXXX 00:01:14.732 15:10:18 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732111818.C4niuI 00:01:14.732 15:10:18 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:14.732 15:10:18 -- 
common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:14.732 15:10:18 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:14.732 15:10:18 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:14.732 15:10:18 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:14.732 15:10:18 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:14.732 15:10:18 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:14.732 15:10:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.732 15:10:18 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:14.732 15:10:18 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:14.732 15:10:18 -- pm/common@17 -- $ local monitor 00:01:14.732 15:10:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.732 15:10:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.732 15:10:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.732 15:10:18 -- pm/common@21 -- $ date +%s 00:01:14.732 15:10:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.732 15:10:18 -- pm/common@21 -- $ date +%s 00:01:14.732 15:10:18 -- pm/common@25 -- $ sleep 1 00:01:14.732 15:10:18 -- pm/common@21 -- $ date +%s 00:01:14.732 15:10:18 -- pm/common@21 -- $ date +%s 00:01:14.732 15:10:18 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732111818 00:01:14.732 15:10:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732111818 00:01:14.732 15:10:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732111818 00:01:14.732 15:10:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732111818 00:01:14.991 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732111818_collect-vmstat.pm.log 00:01:14.991 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732111818_collect-cpu-load.pm.log 00:01:14.991 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732111818_collect-cpu-temp.pm.log 00:01:14.991 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732111818_collect-bmc-pm.bmc.pm.log 00:01:15.928 15:10:19 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:15.928 15:10:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:15.928 15:10:19 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:15.928 15:10:19 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:15.928 15:10:19 -- spdk/autobuild.sh@16 -- $ date -u 00:01:15.928 Wed Nov 20 02:10:19 PM UTC 2024 00:01:15.928 15:10:19 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:01:15.928 v25.01-pre-221-gede20dc4e 00:01:15.928 15:10:19 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:15.928 15:10:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:15.928 15:10:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:15.928 15:10:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:15.928 15:10:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:15.928 15:10:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.928 ************************************ 00:01:15.928 START TEST ubsan 00:01:15.928 ************************************ 00:01:15.928 15:10:19 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:15.928 using ubsan 00:01:15.928 00:01:15.928 real 0m0.000s 00:01:15.928 user 0m0.000s 00:01:15.928 sys 0m0.000s 00:01:15.928 15:10:19 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:15.928 15:10:19 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:15.928 ************************************ 00:01:15.928 END TEST ubsan 00:01:15.928 ************************************ 00:01:15.928 15:10:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:15.928 15:10:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:15.928 15:10:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:15.928 15:10:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:15.928 15:10:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:15.928 15:10:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:15.928 15:10:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:15.928 15:10:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:15.928 15:10:19 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:16.187 Using default SPDK env in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:16.187 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:16.445 Using 'verbs' RDMA provider 00:01:29.593 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:41.801 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:41.801 Creating mk/config.mk...done. 00:01:41.801 Creating mk/cc.flags.mk...done. 00:01:41.801 Type 'make' to build. 00:01:41.801 15:10:45 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:01:41.801 15:10:45 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:41.801 15:10:45 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:41.801 15:10:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.801 ************************************ 00:01:41.801 START TEST make 00:01:41.801 ************************************ 00:01:41.802 15:10:45 make -- common/autotest_common.sh@1129 -- $ make -j96 00:01:42.058 make[1]: Nothing to be done for 'all'. 
00:01:43.438 The Meson build system 00:01:43.438 Version: 1.5.0 00:01:43.438 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:43.438 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:43.438 Build type: native build 00:01:43.438 Project name: libvfio-user 00:01:43.438 Project version: 0.0.1 00:01:43.438 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:43.438 C linker for the host machine: cc ld.bfd 2.40-14 00:01:43.438 Host machine cpu family: x86_64 00:01:43.438 Host machine cpu: x86_64 00:01:43.438 Run-time dependency threads found: YES 00:01:43.438 Library dl found: YES 00:01:43.438 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:43.438 Run-time dependency json-c found: YES 0.17 00:01:43.438 Run-time dependency cmocka found: YES 1.1.7 00:01:43.438 Program pytest-3 found: NO 00:01:43.438 Program flake8 found: NO 00:01:43.438 Program misspell-fixer found: NO 00:01:43.438 Program restructuredtext-lint found: NO 00:01:43.438 Program valgrind found: YES (/usr/bin/valgrind) 00:01:43.438 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:43.438 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:43.438 Compiler for C supports arguments -Wwrite-strings: YES 00:01:43.438 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:43.438 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:43.438 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:43.438 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:43.438 Build targets in project: 8 00:01:43.438 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:43.438 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:43.438 00:01:43.438 libvfio-user 0.0.1 00:01:43.438 00:01:43.438 User defined options 00:01:43.438 buildtype : debug 00:01:43.438 default_library: shared 00:01:43.438 libdir : /usr/local/lib 00:01:43.438 00:01:43.438 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:43.696 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:43.955 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:43.955 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:43.955 [3/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:43.956 [4/37] Compiling C object samples/null.p/null.c.o 00:01:43.956 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:43.956 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:43.956 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:43.956 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:43.956 [9/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:43.956 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:43.956 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:43.956 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:43.956 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:43.956 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:43.956 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:43.956 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:43.956 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:43.956 [18/37] Compiling C object 
test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:43.956 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:43.956 [20/37] Compiling C object samples/server.p/server.c.o 00:01:43.956 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:43.956 [22/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:43.956 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:43.956 [24/37] Compiling C object samples/client.p/client.c.o 00:01:43.956 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:43.956 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:43.956 [27/37] Linking target samples/client 00:01:44.215 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:44.215 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:44.215 [30/37] Linking target test/unit_tests 00:01:44.215 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:44.215 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:44.215 [33/37] Linking target samples/lspci 00:01:44.215 [34/37] Linking target samples/shadow_ioeventfd_server 00:01:44.215 [35/37] Linking target samples/gpio-pci-idio-16 00:01:44.215 [36/37] Linking target samples/null 00:01:44.215 [37/37] Linking target samples/server 00:01:44.474 INFO: autodetecting backend as ninja 00:01:44.474 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:44.474 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:44.733 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:44.733 ninja: no work to do. 
00:01:50.009 The Meson build system 00:01:50.009 Version: 1.5.0 00:01:50.009 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:50.009 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:50.009 Build type: native build 00:01:50.009 Program cat found: YES (/usr/bin/cat) 00:01:50.009 Project name: DPDK 00:01:50.009 Project version: 24.03.0 00:01:50.009 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:50.009 C linker for the host machine: cc ld.bfd 2.40-14 00:01:50.009 Host machine cpu family: x86_64 00:01:50.009 Host machine cpu: x86_64 00:01:50.009 Message: ## Building in Developer Mode ## 00:01:50.009 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:50.009 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:50.009 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:50.009 Program python3 found: YES (/usr/bin/python3) 00:01:50.009 Program cat found: YES (/usr/bin/cat) 00:01:50.009 Compiler for C supports arguments -march=native: YES 00:01:50.009 Checking for size of "void *" : 8 00:01:50.009 Checking for size of "void *" : 8 (cached) 00:01:50.009 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:50.009 Library m found: YES 00:01:50.009 Library numa found: YES 00:01:50.009 Has header "numaif.h" : YES 00:01:50.009 Library fdt found: NO 00:01:50.009 Library execinfo found: NO 00:01:50.009 Has header "execinfo.h" : YES 00:01:50.009 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:50.009 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:50.009 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:50.009 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:50.009 Run-time dependency openssl found: YES 3.1.1 00:01:50.009 Run-time 
dependency libpcap found: YES 1.10.4 00:01:50.009 Has header "pcap.h" with dependency libpcap: YES 00:01:50.009 Compiler for C supports arguments -Wcast-qual: YES 00:01:50.009 Compiler for C supports arguments -Wdeprecated: YES 00:01:50.009 Compiler for C supports arguments -Wformat: YES 00:01:50.009 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:50.009 Compiler for C supports arguments -Wformat-security: NO 00:01:50.009 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:50.009 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:50.009 Compiler for C supports arguments -Wnested-externs: YES 00:01:50.009 Compiler for C supports arguments -Wold-style-definition: YES 00:01:50.009 Compiler for C supports arguments -Wpointer-arith: YES 00:01:50.009 Compiler for C supports arguments -Wsign-compare: YES 00:01:50.009 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:50.009 Compiler for C supports arguments -Wundef: YES 00:01:50.009 Compiler for C supports arguments -Wwrite-strings: YES 00:01:50.009 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:50.009 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:50.009 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:50.009 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:50.009 Program objdump found: YES (/usr/bin/objdump) 00:01:50.009 Compiler for C supports arguments -mavx512f: YES 00:01:50.009 Checking if "AVX512 checking" compiles: YES 00:01:50.009 Fetching value of define "__SSE4_2__" : 1 00:01:50.009 Fetching value of define "__AES__" : 1 00:01:50.009 Fetching value of define "__AVX__" : 1 00:01:50.009 Fetching value of define "__AVX2__" : 1 00:01:50.009 Fetching value of define "__AVX512BW__" : 1 00:01:50.009 Fetching value of define "__AVX512CD__" : 1 00:01:50.009 Fetching value of define "__AVX512DQ__" : 1 00:01:50.009 Fetching value of define "__AVX512F__" : 1 
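Each "Compiler for C supports arguments" line above is the result of a probe compile. A rough shell equivalent of that probe (an approximation of what the build system does, not its exact implementation): compile an empty program with the candidate flag plus `-Werror`, so that an unrecognized option turns into a hard error and the exit status is the YES/NO answer.

```shell
# Probe whether the C compiler accepts a given flag. -Werror promotes
# "unrecognized command-line option" warnings to errors, so the exit
# status of the compile is the result of the check.
check_cflag() {
    printf 'int main(void){return 0;}\n' |
        cc -Werror "$1" -x c -c -o /dev/null - 2>/dev/null
}
check_cflag -Wcast-qual && echo "-Wcast-qual: YES" || echo "-Wcast-qual: NO"
```

The cached "(cached)" entries in the log mean the build system stored an earlier probe result instead of re-running the compile.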
00:01:50.009 Fetching value of define "__AVX512VL__" : 1 00:01:50.009 Fetching value of define "__PCLMUL__" : 1 00:01:50.009 Fetching value of define "__RDRND__" : 1 00:01:50.009 Fetching value of define "__RDSEED__" : 1 00:01:50.009 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:50.009 Fetching value of define "__znver1__" : (undefined) 00:01:50.009 Fetching value of define "__znver2__" : (undefined) 00:01:50.009 Fetching value of define "__znver3__" : (undefined) 00:01:50.009 Fetching value of define "__znver4__" : (undefined) 00:01:50.009 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:50.009 Message: lib/log: Defining dependency "log" 00:01:50.009 Message: lib/kvargs: Defining dependency "kvargs" 00:01:50.009 Message: lib/telemetry: Defining dependency "telemetry" 00:01:50.009 Checking for function "getentropy" : NO 00:01:50.009 Message: lib/eal: Defining dependency "eal" 00:01:50.009 Message: lib/ring: Defining dependency "ring" 00:01:50.009 Message: lib/rcu: Defining dependency "rcu" 00:01:50.009 Message: lib/mempool: Defining dependency "mempool" 00:01:50.009 Message: lib/mbuf: Defining dependency "mbuf" 00:01:50.009 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:50.009 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:50.009 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:50.009 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:50.009 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:50.009 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:50.009 Compiler for C supports arguments -mpclmul: YES 00:01:50.009 Compiler for C supports arguments -maes: YES 00:01:50.009 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:50.009 Compiler for C supports arguments -mavx512bw: YES 00:01:50.009 Compiler for C supports arguments -mavx512dq: YES 00:01:50.009 Compiler for C supports arguments -mavx512vl: YES 00:01:50.009 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:01:50.009 Compiler for C supports arguments -mavx2: YES 00:01:50.009 Compiler for C supports arguments -mavx: YES 00:01:50.009 Message: lib/net: Defining dependency "net" 00:01:50.009 Message: lib/meter: Defining dependency "meter" 00:01:50.009 Message: lib/ethdev: Defining dependency "ethdev" 00:01:50.009 Message: lib/pci: Defining dependency "pci" 00:01:50.009 Message: lib/cmdline: Defining dependency "cmdline" 00:01:50.009 Message: lib/hash: Defining dependency "hash" 00:01:50.009 Message: lib/timer: Defining dependency "timer" 00:01:50.009 Message: lib/compressdev: Defining dependency "compressdev" 00:01:50.009 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:50.009 Message: lib/dmadev: Defining dependency "dmadev" 00:01:50.009 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:50.009 Message: lib/power: Defining dependency "power" 00:01:50.009 Message: lib/reorder: Defining dependency "reorder" 00:01:50.009 Message: lib/security: Defining dependency "security" 00:01:50.009 Has header "linux/userfaultfd.h" : YES 00:01:50.009 Has header "linux/vduse.h" : YES 00:01:50.009 Message: lib/vhost: Defining dependency "vhost" 00:01:50.009 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:50.009 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:50.009 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:50.009 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:50.009 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:50.009 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:50.009 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:50.009 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:50.009 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:50.009 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:01:50.009 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:50.009 Configuring doxy-api-html.conf using configuration 00:01:50.009 Configuring doxy-api-man.conf using configuration 00:01:50.009 Program mandb found: YES (/usr/bin/mandb) 00:01:50.009 Program sphinx-build found: NO 00:01:50.009 Configuring rte_build_config.h using configuration 00:01:50.009 Message: 00:01:50.009 ================= 00:01:50.009 Applications Enabled 00:01:50.009 ================= 00:01:50.009 00:01:50.009 apps: 00:01:50.009 00:01:50.009 00:01:50.009 Message: 00:01:50.010 ================= 00:01:50.010 Libraries Enabled 00:01:50.010 ================= 00:01:50.010 00:01:50.010 libs: 00:01:50.010 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:50.010 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:50.010 cryptodev, dmadev, power, reorder, security, vhost, 00:01:50.010 00:01:50.010 Message: 00:01:50.010 =============== 00:01:50.010 Drivers Enabled 00:01:50.010 =============== 00:01:50.010 00:01:50.010 common: 00:01:50.010 00:01:50.010 bus: 00:01:50.010 pci, vdev, 00:01:50.010 mempool: 00:01:50.010 ring, 00:01:50.010 dma: 00:01:50.010 00:01:50.010 net: 00:01:50.010 00:01:50.010 crypto: 00:01:50.010 00:01:50.010 compress: 00:01:50.010 00:01:50.010 vdpa: 00:01:50.010 00:01:50.010 00:01:50.010 Message: 00:01:50.010 ================= 00:01:50.010 Content Skipped 00:01:50.010 ================= 00:01:50.010 00:01:50.010 apps: 00:01:50.010 dumpcap: explicitly disabled via build config 00:01:50.010 graph: explicitly disabled via build config 00:01:50.010 pdump: explicitly disabled via build config 00:01:50.010 proc-info: explicitly disabled via build config 00:01:50.010 test-acl: explicitly disabled via build config 00:01:50.010 test-bbdev: explicitly disabled via build config 00:01:50.010 test-cmdline: explicitly disabled via build config 00:01:50.010 test-compress-perf: explicitly disabled via build config 00:01:50.010 test-crypto-perf: explicitly disabled 
via build config 00:01:50.010 test-dma-perf: explicitly disabled via build config 00:01:50.010 test-eventdev: explicitly disabled via build config 00:01:50.010 test-fib: explicitly disabled via build config 00:01:50.010 test-flow-perf: explicitly disabled via build config 00:01:50.010 test-gpudev: explicitly disabled via build config 00:01:50.010 test-mldev: explicitly disabled via build config 00:01:50.010 test-pipeline: explicitly disabled via build config 00:01:50.010 test-pmd: explicitly disabled via build config 00:01:50.010 test-regex: explicitly disabled via build config 00:01:50.010 test-sad: explicitly disabled via build config 00:01:50.010 test-security-perf: explicitly disabled via build config 00:01:50.010 00:01:50.010 libs: 00:01:50.010 argparse: explicitly disabled via build config 00:01:50.010 metrics: explicitly disabled via build config 00:01:50.010 acl: explicitly disabled via build config 00:01:50.010 bbdev: explicitly disabled via build config 00:01:50.010 bitratestats: explicitly disabled via build config 00:01:50.010 bpf: explicitly disabled via build config 00:01:50.010 cfgfile: explicitly disabled via build config 00:01:50.010 distributor: explicitly disabled via build config 00:01:50.010 efd: explicitly disabled via build config 00:01:50.010 eventdev: explicitly disabled via build config 00:01:50.010 dispatcher: explicitly disabled via build config 00:01:50.010 gpudev: explicitly disabled via build config 00:01:50.010 gro: explicitly disabled via build config 00:01:50.010 gso: explicitly disabled via build config 00:01:50.010 ip_frag: explicitly disabled via build config 00:01:50.010 jobstats: explicitly disabled via build config 00:01:50.010 latencystats: explicitly disabled via build config 00:01:50.010 lpm: explicitly disabled via build config 00:01:50.010 member: explicitly disabled via build config 00:01:50.010 pcapng: explicitly disabled via build config 00:01:50.010 rawdev: explicitly disabled via build config 00:01:50.010 regexdev: 
explicitly disabled via build config 00:01:50.010 mldev: explicitly disabled via build config 00:01:50.010 rib: explicitly disabled via build config 00:01:50.010 sched: explicitly disabled via build config 00:01:50.010 stack: explicitly disabled via build config 00:01:50.010 ipsec: explicitly disabled via build config 00:01:50.010 pdcp: explicitly disabled via build config 00:01:50.010 fib: explicitly disabled via build config 00:01:50.010 port: explicitly disabled via build config 00:01:50.010 pdump: explicitly disabled via build config 00:01:50.010 table: explicitly disabled via build config 00:01:50.010 pipeline: explicitly disabled via build config 00:01:50.010 graph: explicitly disabled via build config 00:01:50.010 node: explicitly disabled via build config 00:01:50.010 00:01:50.010 drivers: 00:01:50.010 common/cpt: not in enabled drivers build config 00:01:50.010 common/dpaax: not in enabled drivers build config 00:01:50.010 common/iavf: not in enabled drivers build config 00:01:50.010 common/idpf: not in enabled drivers build config 00:01:50.010 common/ionic: not in enabled drivers build config 00:01:50.010 common/mvep: not in enabled drivers build config 00:01:50.010 common/octeontx: not in enabled drivers build config 00:01:50.010 bus/auxiliary: not in enabled drivers build config 00:01:50.010 bus/cdx: not in enabled drivers build config 00:01:50.010 bus/dpaa: not in enabled drivers build config 00:01:50.010 bus/fslmc: not in enabled drivers build config 00:01:50.010 bus/ifpga: not in enabled drivers build config 00:01:50.010 bus/platform: not in enabled drivers build config 00:01:50.010 bus/uacce: not in enabled drivers build config 00:01:50.010 bus/vmbus: not in enabled drivers build config 00:01:50.010 common/cnxk: not in enabled drivers build config 00:01:50.010 common/mlx5: not in enabled drivers build config 00:01:50.010 common/nfp: not in enabled drivers build config 00:01:50.010 common/nitrox: not in enabled drivers build config 00:01:50.010 
common/qat: not in enabled drivers build config 00:01:50.010 common/sfc_efx: not in enabled drivers build config 00:01:50.010 mempool/bucket: not in enabled drivers build config 00:01:50.010 mempool/cnxk: not in enabled drivers build config 00:01:50.010 mempool/dpaa: not in enabled drivers build config 00:01:50.010 mempool/dpaa2: not in enabled drivers build config 00:01:50.010 mempool/octeontx: not in enabled drivers build config 00:01:50.010 mempool/stack: not in enabled drivers build config 00:01:50.010 dma/cnxk: not in enabled drivers build config 00:01:50.010 dma/dpaa: not in enabled drivers build config 00:01:50.010 dma/dpaa2: not in enabled drivers build config 00:01:50.010 dma/hisilicon: not in enabled drivers build config 00:01:50.010 dma/idxd: not in enabled drivers build config 00:01:50.010 dma/ioat: not in enabled drivers build config 00:01:50.010 dma/skeleton: not in enabled drivers build config 00:01:50.010 net/af_packet: not in enabled drivers build config 00:01:50.010 net/af_xdp: not in enabled drivers build config 00:01:50.010 net/ark: not in enabled drivers build config 00:01:50.010 net/atlantic: not in enabled drivers build config 00:01:50.010 net/avp: not in enabled drivers build config 00:01:50.010 net/axgbe: not in enabled drivers build config 00:01:50.010 net/bnx2x: not in enabled drivers build config 00:01:50.010 net/bnxt: not in enabled drivers build config 00:01:50.010 net/bonding: not in enabled drivers build config 00:01:50.010 net/cnxk: not in enabled drivers build config 00:01:50.010 net/cpfl: not in enabled drivers build config 00:01:50.010 net/cxgbe: not in enabled drivers build config 00:01:50.010 net/dpaa: not in enabled drivers build config 00:01:50.010 net/dpaa2: not in enabled drivers build config 00:01:50.010 net/e1000: not in enabled drivers build config 00:01:50.010 net/ena: not in enabled drivers build config 00:01:50.010 net/enetc: not in enabled drivers build config 00:01:50.010 net/enetfec: not in enabled drivers build 
config 00:01:50.010 net/enic: not in enabled drivers build config 00:01:50.010 net/failsafe: not in enabled drivers build config 00:01:50.010 net/fm10k: not in enabled drivers build config 00:01:50.010 net/gve: not in enabled drivers build config 00:01:50.010 net/hinic: not in enabled drivers build config 00:01:50.010 net/hns3: not in enabled drivers build config 00:01:50.010 net/i40e: not in enabled drivers build config 00:01:50.010 net/iavf: not in enabled drivers build config 00:01:50.010 net/ice: not in enabled drivers build config 00:01:50.010 net/idpf: not in enabled drivers build config 00:01:50.010 net/igc: not in enabled drivers build config 00:01:50.010 net/ionic: not in enabled drivers build config 00:01:50.010 net/ipn3ke: not in enabled drivers build config 00:01:50.010 net/ixgbe: not in enabled drivers build config 00:01:50.010 net/mana: not in enabled drivers build config 00:01:50.010 net/memif: not in enabled drivers build config 00:01:50.010 net/mlx4: not in enabled drivers build config 00:01:50.010 net/mlx5: not in enabled drivers build config 00:01:50.010 net/mvneta: not in enabled drivers build config 00:01:50.010 net/mvpp2: not in enabled drivers build config 00:01:50.010 net/netvsc: not in enabled drivers build config 00:01:50.010 net/nfb: not in enabled drivers build config 00:01:50.010 net/nfp: not in enabled drivers build config 00:01:50.010 net/ngbe: not in enabled drivers build config 00:01:50.010 net/null: not in enabled drivers build config 00:01:50.010 net/octeontx: not in enabled drivers build config 00:01:50.010 net/octeon_ep: not in enabled drivers build config 00:01:50.010 net/pcap: not in enabled drivers build config 00:01:50.010 net/pfe: not in enabled drivers build config 00:01:50.010 net/qede: not in enabled drivers build config 00:01:50.010 net/ring: not in enabled drivers build config 00:01:50.010 net/sfc: not in enabled drivers build config 00:01:50.010 net/softnic: not in enabled drivers build config 00:01:50.010 net/tap: 
not in enabled drivers build config 00:01:50.010 net/thunderx: not in enabled drivers build config 00:01:50.010 net/txgbe: not in enabled drivers build config 00:01:50.010 net/vdev_netvsc: not in enabled drivers build config 00:01:50.010 net/vhost: not in enabled drivers build config 00:01:50.010 net/virtio: not in enabled drivers build config 00:01:50.010 net/vmxnet3: not in enabled drivers build config 00:01:50.010 raw/*: missing internal dependency, "rawdev" 00:01:50.010 crypto/armv8: not in enabled drivers build config 00:01:50.010 crypto/bcmfs: not in enabled drivers build config 00:01:50.010 crypto/caam_jr: not in enabled drivers build config 00:01:50.010 crypto/ccp: not in enabled drivers build config 00:01:50.010 crypto/cnxk: not in enabled drivers build config 00:01:50.010 crypto/dpaa_sec: not in enabled drivers build config 00:01:50.010 crypto/dpaa2_sec: not in enabled drivers build config 00:01:50.010 crypto/ipsec_mb: not in enabled drivers build config 00:01:50.010 crypto/mlx5: not in enabled drivers build config 00:01:50.010 crypto/mvsam: not in enabled drivers build config 00:01:50.010 crypto/nitrox: not in enabled drivers build config 00:01:50.011 crypto/null: not in enabled drivers build config 00:01:50.011 crypto/octeontx: not in enabled drivers build config 00:01:50.011 crypto/openssl: not in enabled drivers build config 00:01:50.011 crypto/scheduler: not in enabled drivers build config 00:01:50.011 crypto/uadk: not in enabled drivers build config 00:01:50.011 crypto/virtio: not in enabled drivers build config 00:01:50.011 compress/isal: not in enabled drivers build config 00:01:50.011 compress/mlx5: not in enabled drivers build config 00:01:50.011 compress/nitrox: not in enabled drivers build config 00:01:50.011 compress/octeontx: not in enabled drivers build config 00:01:50.011 compress/zlib: not in enabled drivers build config 00:01:50.011 regex/*: missing internal dependency, "regexdev" 00:01:50.011 ml/*: missing internal dependency, "mldev" 
00:01:50.011 vdpa/ifc: not in enabled drivers build config 00:01:50.011 vdpa/mlx5: not in enabled drivers build config 00:01:50.011 vdpa/nfp: not in enabled drivers build config 00:01:50.011 vdpa/sfc: not in enabled drivers build config 00:01:50.011 event/*: missing internal dependency, "eventdev" 00:01:50.011 baseband/*: missing internal dependency, "bbdev" 00:01:50.011 gpu/*: missing internal dependency, "gpudev" 00:01:50.011 00:01:50.011 00:01:50.011 Build targets in project: 85 00:01:50.011 00:01:50.011 DPDK 24.03.0 00:01:50.011 00:01:50.011 User defined options 00:01:50.011 buildtype : debug 00:01:50.011 default_library : shared 00:01:50.011 libdir : lib 00:01:50.011 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:50.011 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:50.011 c_link_args : 00:01:50.011 cpu_instruction_set: native 00:01:50.011 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:50.011 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:50.011 enable_docs : false 00:01:50.011 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:50.011 enable_kmods : false 00:01:50.011 max_lcores : 128 00:01:50.011 tests : false 00:01:50.011 00:01:50.011 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:50.587 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:50.587 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:50.587 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:50.587 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:50.587 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:50.587 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:50.587 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:50.587 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:50.587 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:50.587 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:50.587 [10/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:50.587 [11/268] Linking static target lib/librte_kvargs.a 00:01:50.587 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:50.587 [13/268] Linking static target lib/librte_log.a 00:01:50.587 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:50.587 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:50.587 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:50.587 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:50.846 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:50.846 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:50.846 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:50.846 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:50.846 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:50.846 [23/268] Linking static target lib/librte_pci.a 00:01:50.846 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:51.104 
[25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:51.104 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:51.104 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:51.104 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:51.104 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:51.104 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:51.104 [31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:51.104 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:51.104 [33/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:51.104 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:51.104 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:51.104 [36/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:51.104 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:51.104 [38/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:51.104 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:51.104 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:51.104 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:51.104 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:51.104 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:51.104 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:51.104 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:51.104 [46/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:51.104 [47/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:51.104 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:51.104 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:51.104 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:51.104 [51/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:51.104 [52/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:51.104 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:51.104 [54/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:51.104 [55/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:51.104 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:51.104 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:51.104 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:51.104 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:51.104 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:51.104 [61/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:51.104 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:51.104 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:51.104 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:51.104 [65/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:51.104 [66/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:51.104 [67/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:51.104 [68/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:51.104 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:51.104 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:51.104 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:51.104 [72/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:51.104 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:51.104 [74/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:51.104 [75/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.104 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:51.104 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:51.104 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:51.104 [79/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:51.104 [80/268] Linking static target lib/librte_ring.a 00:01:51.105 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:51.105 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:51.105 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:51.105 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:51.105 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:51.105 [86/268] Linking static target lib/librte_meter.a 00:01:51.105 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:51.105 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:51.105 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:51.363 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:51.363 [91/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:51.363 [92/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:51.363 [93/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:51.363 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:51.363 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:51.363 [96/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:51.363 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:51.363 [98/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.363 [99/268] Linking static target lib/librte_telemetry.a 00:01:51.363 [100/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:51.363 [101/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:51.363 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:51.363 [103/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:51.363 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:51.363 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:51.363 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:51.363 [107/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:51.363 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:51.363 [109/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:51.363 [110/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:51.363 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:51.363 [112/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:51.363 [113/268] Linking static target lib/librte_mempool.a 00:01:51.363 [114/268] Linking static target lib/librte_net.a 00:01:51.363 [115/268] Linking static target lib/librte_rcu.a 00:01:51.363 [116/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:51.363 [117/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:51.363 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:51.363 [119/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:51.363 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:51.363 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:51.363 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:51.363 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:51.363 [124/268] Linking static target lib/librte_eal.a 00:01:51.363 [125/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:51.363 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:51.363 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:51.363 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:51.363 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:51.363 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:51.363 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:51.363 [132/268] Linking static target lib/librte_cmdline.a 00:01:51.363 [133/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:51.363 [134/268] Linking static target lib/librte_mbuf.a 00:01:51.363 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:51.363 [136/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.623 [137/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.623 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 
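The "Generating symbol file …symbols" steps in this log record each shared library's exported symbol list, so that dependents only need relinking when that list changes. The exported (dynamic) symbols of any shared object can be inspected the same way with `nm`; the `libdemo` below is a made-up example, not a target from this build:

```shell
# Build a throwaway shared library and dump its dynamic symbol table,
# i.e. the interface a symbol file would capture.
cat > demo.c <<'EOF'
int demo_add(int a, int b) { return a + b; }
EOF
cc -shared -fPIC -o libdemo.so demo.c
nm -D --defined-only libdemo.so
rm -f demo.c libdemo.so
```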
00:01:51.623 [139/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.623 [140/268] Linking target lib/librte_log.so.24.1 00:01:51.623 [141/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:51.623 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:51.623 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:51.623 [144/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.623 [145/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:51.623 [146/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:51.623 [147/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:51.623 [148/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:51.623 [149/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.623 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:51.623 [151/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:51.623 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:51.623 [153/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:51.623 [154/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:51.623 [155/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:51.623 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:51.623 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:51.623 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:51.623 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:51.623 [160/268] Linking static 
target lib/librte_compressdev.a 00:01:51.623 [161/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:51.623 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:51.623 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:51.623 [164/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:51.623 [165/268] Linking static target lib/librte_power.a 00:01:51.623 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:51.623 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:51.623 [168/268] Linking target lib/librte_kvargs.so.24.1 00:01:51.623 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:51.623 [170/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:51.623 [171/268] Linking static target lib/librte_timer.a 00:01:51.623 [172/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:51.623 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:51.623 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:51.882 [175/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.882 [176/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:51.882 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:51.882 [178/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:51.882 [179/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:51.882 [180/268] Linking target lib/librte_telemetry.so.24.1 00:01:51.882 [181/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:51.882 [182/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:51.882 [183/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:51.882 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:51.882 [185/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:51.882 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:51.882 [187/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:51.882 [188/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:51.882 [189/268] Linking static target lib/librte_dmadev.a 00:01:51.882 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:51.882 [191/268] Linking static target lib/librte_reorder.a 00:01:51.882 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:51.882 [193/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:51.882 [194/268] Linking static target lib/librte_security.a 00:01:51.882 [195/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:52.141 [196/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:52.141 [197/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:52.141 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:52.141 [199/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:52.141 [200/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.141 [201/268] Linking static target lib/librte_hash.a 00:01:52.141 [202/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:52.141 [203/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:52.141 [204/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.141 [205/268] Compiling C object 
drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:52.141 [206/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.141 [207/268] Linking static target drivers/librte_bus_vdev.a 00:01:52.141 [208/268] Linking static target drivers/librte_mempool_ring.a 00:01:52.141 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:52.141 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:52.141 [211/268] Linking static target drivers/librte_bus_pci.a 00:01:52.141 [212/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.141 [213/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.141 [214/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:52.141 [215/268] Linking static target lib/librte_cryptodev.a 00:01:52.401 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:52.401 [217/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.401 [218/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.401 [219/268] Linking static target lib/librte_ethdev.a 00:01:52.401 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.401 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.659 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.659 [223/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.659 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.659 [225/268] Compiling 
C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:52.918 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.918 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.486 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:53.486 [229/268] Linking static target lib/librte_vhost.a 00:01:54.054 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.433 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.708 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.276 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.534 [234/268] Linking target lib/librte_eal.so.24.1 00:02:01.534 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:01.534 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:01.534 [237/268] Linking target lib/librte_meter.so.24.1 00:02:01.534 [238/268] Linking target lib/librte_ring.so.24.1 00:02:01.534 [239/268] Linking target lib/librte_timer.so.24.1 00:02:01.534 [240/268] Linking target lib/librte_pci.so.24.1 00:02:01.534 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:01.792 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:01.792 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:01.792 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:01.792 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:01.792 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:01.792 [247/268] Linking target lib/librte_mempool.so.24.1 
00:02:01.792 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:01.792 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:02.051 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:02.051 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:02.051 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:02.051 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:02.051 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:02.308 [255/268] Linking target lib/librte_net.so.24.1 00:02:02.308 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:02.308 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:02.308 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:02.308 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:02.308 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:02.308 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:02.308 [262/268] Linking target lib/librte_hash.so.24.1 00:02:02.308 [263/268] Linking target lib/librte_security.so.24.1 00:02:02.308 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:02.568 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:02.568 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:02.568 [267/268] Linking target lib/librte_power.so.24.1 00:02:02.568 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:02.568 INFO: autodetecting backend as ninja 00:02:02.568 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:12.549 CC lib/ut/ut.o 00:02:12.549 CC lib/log/log.o 00:02:12.549 CC lib/log/log_deprecated.o 00:02:12.549 CC lib/log/log_flags.o 00:02:12.549 CC lib/ut_mock/mock.o 
00:02:12.549 LIB libspdk_ut_mock.a 00:02:12.809 LIB libspdk_ut.a 00:02:12.809 LIB libspdk_log.a 00:02:12.809 SO libspdk_ut_mock.so.6.0 00:02:12.809 SO libspdk_ut.so.2.0 00:02:12.809 SO libspdk_log.so.7.1 00:02:12.809 SYMLINK libspdk_ut_mock.so 00:02:12.809 SYMLINK libspdk_ut.so 00:02:12.809 SYMLINK libspdk_log.so 00:02:13.069 CC lib/dma/dma.o 00:02:13.069 CC lib/util/base64.o 00:02:13.069 CC lib/util/bit_array.o 00:02:13.069 CC lib/util/cpuset.o 00:02:13.069 CC lib/util/crc16.o 00:02:13.069 CC lib/util/crc32.o 00:02:13.069 CC lib/util/crc32c.o 00:02:13.069 CXX lib/trace_parser/trace.o 00:02:13.069 CC lib/ioat/ioat.o 00:02:13.069 CC lib/util/crc32_ieee.o 00:02:13.069 CC lib/util/crc64.o 00:02:13.069 CC lib/util/dif.o 00:02:13.069 CC lib/util/fd.o 00:02:13.069 CC lib/util/fd_group.o 00:02:13.069 CC lib/util/file.o 00:02:13.069 CC lib/util/hexlify.o 00:02:13.069 CC lib/util/iov.o 00:02:13.069 CC lib/util/math.o 00:02:13.069 CC lib/util/net.o 00:02:13.069 CC lib/util/pipe.o 00:02:13.069 CC lib/util/strerror_tls.o 00:02:13.069 CC lib/util/string.o 00:02:13.069 CC lib/util/uuid.o 00:02:13.069 CC lib/util/xor.o 00:02:13.069 CC lib/util/zipf.o 00:02:13.069 CC lib/util/md5.o 00:02:13.328 CC lib/vfio_user/host/vfio_user_pci.o 00:02:13.328 CC lib/vfio_user/host/vfio_user.o 00:02:13.328 LIB libspdk_dma.a 00:02:13.328 SO libspdk_dma.so.5.0 00:02:13.328 SYMLINK libspdk_dma.so 00:02:13.328 LIB libspdk_ioat.a 00:02:13.328 SO libspdk_ioat.so.7.0 00:02:13.587 SYMLINK libspdk_ioat.so 00:02:13.587 LIB libspdk_vfio_user.a 00:02:13.587 SO libspdk_vfio_user.so.5.0 00:02:13.587 SYMLINK libspdk_vfio_user.so 00:02:13.587 LIB libspdk_util.a 00:02:13.587 SO libspdk_util.so.10.1 00:02:13.847 SYMLINK libspdk_util.so 00:02:13.847 LIB libspdk_trace_parser.a 00:02:13.847 SO libspdk_trace_parser.so.6.0 00:02:13.847 SYMLINK libspdk_trace_parser.so 00:02:14.106 CC lib/idxd/idxd.o 00:02:14.106 CC lib/idxd/idxd_user.o 00:02:14.106 CC lib/idxd/idxd_kernel.o 00:02:14.106 CC lib/rdma_utils/rdma_utils.o 
00:02:14.106 CC lib/conf/conf.o 00:02:14.106 CC lib/vmd/vmd.o 00:02:14.106 CC lib/json/json_parse.o 00:02:14.106 CC lib/vmd/led.o 00:02:14.106 CC lib/env_dpdk/env.o 00:02:14.106 CC lib/json/json_util.o 00:02:14.106 CC lib/env_dpdk/memory.o 00:02:14.106 CC lib/json/json_write.o 00:02:14.106 CC lib/env_dpdk/pci.o 00:02:14.106 CC lib/env_dpdk/init.o 00:02:14.106 CC lib/env_dpdk/threads.o 00:02:14.106 CC lib/env_dpdk/pci_ioat.o 00:02:14.106 CC lib/env_dpdk/pci_virtio.o 00:02:14.106 CC lib/env_dpdk/pci_vmd.o 00:02:14.106 CC lib/env_dpdk/pci_idxd.o 00:02:14.106 CC lib/env_dpdk/pci_event.o 00:02:14.106 CC lib/env_dpdk/sigbus_handler.o 00:02:14.106 CC lib/env_dpdk/pci_dpdk.o 00:02:14.106 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:14.106 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:14.365 LIB libspdk_conf.a 00:02:14.365 LIB libspdk_rdma_utils.a 00:02:14.365 SO libspdk_conf.so.6.0 00:02:14.365 SO libspdk_rdma_utils.so.1.0 00:02:14.365 LIB libspdk_json.a 00:02:14.365 SO libspdk_json.so.6.0 00:02:14.365 SYMLINK libspdk_rdma_utils.so 00:02:14.365 SYMLINK libspdk_conf.so 00:02:14.365 SYMLINK libspdk_json.so 00:02:14.623 LIB libspdk_idxd.a 00:02:14.623 SO libspdk_idxd.so.12.1 00:02:14.623 LIB libspdk_vmd.a 00:02:14.623 SO libspdk_vmd.so.6.0 00:02:14.623 SYMLINK libspdk_idxd.so 00:02:14.623 CC lib/rdma_provider/common.o 00:02:14.623 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:14.623 SYMLINK libspdk_vmd.so 00:02:14.882 CC lib/jsonrpc/jsonrpc_server.o 00:02:14.882 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:14.882 CC lib/jsonrpc/jsonrpc_client.o 00:02:14.882 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:14.882 LIB libspdk_rdma_provider.a 00:02:14.882 SO libspdk_rdma_provider.so.7.0 00:02:14.882 LIB libspdk_jsonrpc.a 00:02:15.140 SYMLINK libspdk_rdma_provider.so 00:02:15.140 SO libspdk_jsonrpc.so.6.0 00:02:15.140 SYMLINK libspdk_jsonrpc.so 00:02:15.140 LIB libspdk_env_dpdk.a 00:02:15.140 SO libspdk_env_dpdk.so.15.1 00:02:15.398 SYMLINK libspdk_env_dpdk.so 00:02:15.398 CC lib/rpc/rpc.o 
00:02:15.657 LIB libspdk_rpc.a 00:02:15.657 SO libspdk_rpc.so.6.0 00:02:15.657 SYMLINK libspdk_rpc.so 00:02:15.916 CC lib/trace/trace.o 00:02:15.916 CC lib/notify/notify.o 00:02:15.916 CC lib/trace/trace_flags.o 00:02:15.916 CC lib/notify/notify_rpc.o 00:02:15.916 CC lib/trace/trace_rpc.o 00:02:15.916 CC lib/keyring/keyring.o 00:02:15.916 CC lib/keyring/keyring_rpc.o 00:02:16.175 LIB libspdk_notify.a 00:02:16.175 SO libspdk_notify.so.6.0 00:02:16.175 LIB libspdk_keyring.a 00:02:16.175 LIB libspdk_trace.a 00:02:16.175 SO libspdk_keyring.so.2.0 00:02:16.175 SYMLINK libspdk_notify.so 00:02:16.175 SO libspdk_trace.so.11.0 00:02:16.175 SYMLINK libspdk_keyring.so 00:02:16.434 SYMLINK libspdk_trace.so 00:02:16.693 CC lib/sock/sock.o 00:02:16.693 CC lib/sock/sock_rpc.o 00:02:16.693 CC lib/thread/thread.o 00:02:16.693 CC lib/thread/iobuf.o 00:02:16.952 LIB libspdk_sock.a 00:02:16.952 SO libspdk_sock.so.10.0 00:02:16.952 SYMLINK libspdk_sock.so 00:02:17.519 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:17.519 CC lib/nvme/nvme_ctrlr.o 00:02:17.519 CC lib/nvme/nvme_fabric.o 00:02:17.519 CC lib/nvme/nvme_ns_cmd.o 00:02:17.519 CC lib/nvme/nvme_ns.o 00:02:17.519 CC lib/nvme/nvme_pcie_common.o 00:02:17.519 CC lib/nvme/nvme_pcie.o 00:02:17.519 CC lib/nvme/nvme_qpair.o 00:02:17.519 CC lib/nvme/nvme.o 00:02:17.519 CC lib/nvme/nvme_quirks.o 00:02:17.519 CC lib/nvme/nvme_transport.o 00:02:17.519 CC lib/nvme/nvme_discovery.o 00:02:17.519 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:17.519 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:17.519 CC lib/nvme/nvme_tcp.o 00:02:17.519 CC lib/nvme/nvme_opal.o 00:02:17.519 CC lib/nvme/nvme_poll_group.o 00:02:17.519 CC lib/nvme/nvme_io_msg.o 00:02:17.519 CC lib/nvme/nvme_zns.o 00:02:17.519 CC lib/nvme/nvme_stubs.o 00:02:17.519 CC lib/nvme/nvme_auth.o 00:02:17.519 CC lib/nvme/nvme_cuse.o 00:02:17.519 CC lib/nvme/nvme_vfio_user.o 00:02:17.519 CC lib/nvme/nvme_rdma.o 00:02:17.778 LIB libspdk_thread.a 00:02:17.778 SO libspdk_thread.so.11.0 00:02:17.778 SYMLINK 
libspdk_thread.so 00:02:18.037 CC lib/init/json_config.o 00:02:18.037 CC lib/init/subsystem.o 00:02:18.037 CC lib/blob/blobstore.o 00:02:18.037 CC lib/init/subsystem_rpc.o 00:02:18.037 CC lib/blob/request.o 00:02:18.037 CC lib/init/rpc.o 00:02:18.037 CC lib/blob/blob_bs_dev.o 00:02:18.037 CC lib/blob/zeroes.o 00:02:18.037 CC lib/accel/accel.o 00:02:18.037 CC lib/accel/accel_sw.o 00:02:18.037 CC lib/accel/accel_rpc.o 00:02:18.037 CC lib/vfu_tgt/tgt_endpoint.o 00:02:18.037 CC lib/vfu_tgt/tgt_rpc.o 00:02:18.037 CC lib/fsdev/fsdev.o 00:02:18.037 CC lib/fsdev/fsdev_io.o 00:02:18.037 CC lib/fsdev/fsdev_rpc.o 00:02:18.037 CC lib/virtio/virtio.o 00:02:18.037 CC lib/virtio/virtio_vhost_user.o 00:02:18.037 CC lib/virtio/virtio_vfio_user.o 00:02:18.037 CC lib/virtio/virtio_pci.o 00:02:18.296 LIB libspdk_init.a 00:02:18.296 SO libspdk_init.so.6.0 00:02:18.296 SYMLINK libspdk_init.so 00:02:18.555 LIB libspdk_virtio.a 00:02:18.555 LIB libspdk_vfu_tgt.a 00:02:18.555 SO libspdk_virtio.so.7.0 00:02:18.555 SO libspdk_vfu_tgt.so.3.0 00:02:18.555 SYMLINK libspdk_virtio.so 00:02:18.555 SYMLINK libspdk_vfu_tgt.so 00:02:18.555 LIB libspdk_fsdev.a 00:02:18.814 SO libspdk_fsdev.so.2.0 00:02:18.814 CC lib/event/app.o 00:02:18.814 CC lib/event/reactor.o 00:02:18.814 CC lib/event/log_rpc.o 00:02:18.814 CC lib/event/app_rpc.o 00:02:18.814 CC lib/event/scheduler_static.o 00:02:18.814 SYMLINK libspdk_fsdev.so 00:02:19.074 LIB libspdk_accel.a 00:02:19.074 SO libspdk_accel.so.16.0 00:02:19.074 LIB libspdk_nvme.a 00:02:19.074 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:19.074 LIB libspdk_event.a 00:02:19.074 SYMLINK libspdk_accel.so 00:02:19.074 SO libspdk_event.so.14.0 00:02:19.074 SO libspdk_nvme.so.15.0 00:02:19.074 SYMLINK libspdk_event.so 00:02:19.333 SYMLINK libspdk_nvme.so 00:02:19.333 CC lib/bdev/bdev.o 00:02:19.333 CC lib/bdev/bdev_rpc.o 00:02:19.333 CC lib/bdev/bdev_zone.o 00:02:19.333 CC lib/bdev/part.o 00:02:19.333 CC lib/bdev/scsi_nvme.o 00:02:19.592 LIB libspdk_fuse_dispatcher.a 
00:02:19.592 SO libspdk_fuse_dispatcher.so.1.0 00:02:19.592 SYMLINK libspdk_fuse_dispatcher.so 00:02:20.161 LIB libspdk_blob.a 00:02:20.420 SO libspdk_blob.so.11.0 00:02:20.420 SYMLINK libspdk_blob.so 00:02:20.679 CC lib/blobfs/blobfs.o 00:02:20.679 CC lib/blobfs/tree.o 00:02:20.679 CC lib/lvol/lvol.o 00:02:21.247 LIB libspdk_bdev.a 00:02:21.247 LIB libspdk_blobfs.a 00:02:21.247 SO libspdk_bdev.so.17.0 00:02:21.247 SO libspdk_blobfs.so.10.0 00:02:21.247 LIB libspdk_lvol.a 00:02:21.247 SYMLINK libspdk_bdev.so 00:02:21.247 SO libspdk_lvol.so.10.0 00:02:21.506 SYMLINK libspdk_blobfs.so 00:02:21.506 SYMLINK libspdk_lvol.so 00:02:21.764 CC lib/nbd/nbd.o 00:02:21.764 CC lib/nbd/nbd_rpc.o 00:02:21.764 CC lib/nvmf/ctrlr.o 00:02:21.764 CC lib/nvmf/ctrlr_discovery.o 00:02:21.765 CC lib/ublk/ublk.o 00:02:21.765 CC lib/nvmf/ctrlr_bdev.o 00:02:21.765 CC lib/ftl/ftl_core.o 00:02:21.765 CC lib/scsi/dev.o 00:02:21.765 CC lib/ublk/ublk_rpc.o 00:02:21.765 CC lib/nvmf/subsystem.o 00:02:21.765 CC lib/ftl/ftl_init.o 00:02:21.765 CC lib/scsi/lun.o 00:02:21.765 CC lib/ftl/ftl_layout.o 00:02:21.765 CC lib/nvmf/nvmf.o 00:02:21.765 CC lib/scsi/port.o 00:02:21.765 CC lib/ftl/ftl_debug.o 00:02:21.765 CC lib/nvmf/nvmf_rpc.o 00:02:21.765 CC lib/scsi/scsi.o 00:02:21.765 CC lib/ftl/ftl_io.o 00:02:21.765 CC lib/nvmf/transport.o 00:02:21.765 CC lib/ftl/ftl_sb.o 00:02:21.765 CC lib/scsi/scsi_bdev.o 00:02:21.765 CC lib/ftl/ftl_l2p.o 00:02:21.765 CC lib/nvmf/tcp.o 00:02:21.765 CC lib/nvmf/stubs.o 00:02:21.765 CC lib/scsi/scsi_rpc.o 00:02:21.765 CC lib/scsi/scsi_pr.o 00:02:21.765 CC lib/ftl/ftl_l2p_flat.o 00:02:21.765 CC lib/nvmf/mdns_server.o 00:02:21.765 CC lib/ftl/ftl_nv_cache.o 00:02:21.765 CC lib/scsi/task.o 00:02:21.765 CC lib/ftl/ftl_band_ops.o 00:02:21.765 CC lib/ftl/ftl_band.o 00:02:21.765 CC lib/nvmf/vfio_user.o 00:02:21.765 CC lib/ftl/ftl_writer.o 00:02:21.765 CC lib/nvmf/rdma.o 00:02:21.765 CC lib/nvmf/auth.o 00:02:21.765 CC lib/ftl/ftl_rq.o 00:02:21.765 CC lib/ftl/ftl_reloc.o 00:02:21.765 
CC lib/ftl/ftl_l2p_cache.o 00:02:21.765 CC lib/ftl/ftl_p2l.o 00:02:21.765 CC lib/ftl/ftl_p2l_log.o 00:02:21.765 CC lib/ftl/mngt/ftl_mngt.o 00:02:21.765 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:21.765 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:21.765 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:21.765 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:21.765 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:21.765 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:21.765 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:21.765 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:21.765 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:21.765 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:21.765 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:21.765 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:21.765 CC lib/ftl/utils/ftl_conf.o 00:02:21.765 CC lib/ftl/utils/ftl_md.o 00:02:21.765 CC lib/ftl/utils/ftl_mempool.o 00:02:21.765 CC lib/ftl/utils/ftl_bitmap.o 00:02:21.765 CC lib/ftl/utils/ftl_property.o 00:02:21.765 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:21.765 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:21.765 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:21.765 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:21.765 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:21.765 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:21.765 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:21.765 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:21.765 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:21.765 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:21.765 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:21.765 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:21.765 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:21.765 CC lib/ftl/base/ftl_base_dev.o 00:02:21.765 CC lib/ftl/base/ftl_base_bdev.o 00:02:21.765 CC lib/ftl/ftl_trace.o 00:02:22.330 LIB libspdk_nbd.a 00:02:22.330 SO libspdk_nbd.so.7.0 00:02:22.330 SYMLINK libspdk_nbd.so 00:02:22.330 LIB libspdk_scsi.a 00:02:22.330 SO libspdk_scsi.so.9.0 00:02:22.330 SYMLINK libspdk_scsi.so 00:02:22.330 LIB libspdk_ublk.a 00:02:22.330 SO libspdk_ublk.so.3.0 00:02:22.589 SYMLINK libspdk_ublk.so 00:02:22.589 LIB 
libspdk_ftl.a 00:02:22.589 CC lib/vhost/vhost.o 00:02:22.589 CC lib/iscsi/conn.o 00:02:22.589 CC lib/vhost/vhost_rpc.o 00:02:22.589 CC lib/iscsi/init_grp.o 00:02:22.589 CC lib/vhost/vhost_scsi.o 00:02:22.589 CC lib/iscsi/iscsi.o 00:02:22.589 CC lib/iscsi/param.o 00:02:22.589 CC lib/vhost/rte_vhost_user.o 00:02:22.589 CC lib/vhost/vhost_blk.o 00:02:22.589 CC lib/iscsi/portal_grp.o 00:02:22.589 CC lib/iscsi/tgt_node.o 00:02:22.589 CC lib/iscsi/iscsi_subsystem.o 00:02:22.589 CC lib/iscsi/iscsi_rpc.o 00:02:22.589 CC lib/iscsi/task.o 00:02:22.847 SO libspdk_ftl.so.9.0 00:02:23.106 SYMLINK libspdk_ftl.so 00:02:23.365 LIB libspdk_nvmf.a 00:02:23.365 SO libspdk_nvmf.so.20.0 00:02:23.624 LIB libspdk_vhost.a 00:02:23.624 SO libspdk_vhost.so.8.0 00:02:23.624 SYMLINK libspdk_nvmf.so 00:02:23.624 SYMLINK libspdk_vhost.so 00:02:23.624 LIB libspdk_iscsi.a 00:02:23.884 SO libspdk_iscsi.so.8.0 00:02:23.884 SYMLINK libspdk_iscsi.so 00:02:24.452 CC module/env_dpdk/env_dpdk_rpc.o 00:02:24.452 CC module/vfu_device/vfu_virtio.o 00:02:24.452 CC module/vfu_device/vfu_virtio_blk.o 00:02:24.452 CC module/vfu_device/vfu_virtio_rpc.o 00:02:24.452 CC module/vfu_device/vfu_virtio_scsi.o 00:02:24.452 CC module/vfu_device/vfu_virtio_fs.o 00:02:24.452 CC module/sock/posix/posix.o 00:02:24.452 CC module/keyring/file/keyring.o 00:02:24.452 CC module/keyring/file/keyring_rpc.o 00:02:24.452 CC module/accel/iaa/accel_iaa.o 00:02:24.452 CC module/accel/iaa/accel_iaa_rpc.o 00:02:24.452 LIB libspdk_env_dpdk_rpc.a 00:02:24.452 CC module/keyring/linux/keyring_rpc.o 00:02:24.452 CC module/keyring/linux/keyring.o 00:02:24.452 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:24.452 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:24.452 CC module/accel/dsa/accel_dsa.o 00:02:24.452 CC module/accel/ioat/accel_ioat.o 00:02:24.452 CC module/accel/ioat/accel_ioat_rpc.o 00:02:24.452 CC module/accel/dsa/accel_dsa_rpc.o 00:02:24.452 CC module/fsdev/aio/fsdev_aio.o 00:02:24.452 CC 
module/fsdev/aio/linux_aio_mgr.o 00:02:24.452 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:24.452 CC module/scheduler/gscheduler/gscheduler.o 00:02:24.452 CC module/accel/error/accel_error.o 00:02:24.452 CC module/accel/error/accel_error_rpc.o 00:02:24.710 CC module/blob/bdev/blob_bdev.o 00:02:24.710 SO libspdk_env_dpdk_rpc.so.6.0 00:02:24.710 SYMLINK libspdk_env_dpdk_rpc.so 00:02:24.710 LIB libspdk_keyring_file.a 00:02:24.710 SO libspdk_keyring_file.so.2.0 00:02:24.710 LIB libspdk_scheduler_dpdk_governor.a 00:02:24.710 LIB libspdk_keyring_linux.a 00:02:24.710 LIB libspdk_scheduler_gscheduler.a 00:02:24.710 LIB libspdk_accel_ioat.a 00:02:24.710 LIB libspdk_scheduler_dynamic.a 00:02:24.710 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:24.710 SO libspdk_scheduler_gscheduler.so.4.0 00:02:24.710 SO libspdk_keyring_linux.so.1.0 00:02:24.710 LIB libspdk_accel_iaa.a 00:02:24.710 SYMLINK libspdk_keyring_file.so 00:02:24.710 LIB libspdk_accel_error.a 00:02:24.710 SO libspdk_accel_ioat.so.6.0 00:02:24.710 SO libspdk_scheduler_dynamic.so.4.0 00:02:24.710 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:24.710 SO libspdk_accel_iaa.so.3.0 00:02:24.710 SO libspdk_accel_error.so.2.0 00:02:24.969 SYMLINK libspdk_scheduler_gscheduler.so 00:02:24.969 LIB libspdk_accel_dsa.a 00:02:24.969 SYMLINK libspdk_keyring_linux.so 00:02:24.969 LIB libspdk_blob_bdev.a 00:02:24.969 SYMLINK libspdk_accel_ioat.so 00:02:24.969 SYMLINK libspdk_scheduler_dynamic.so 00:02:24.969 SO libspdk_accel_dsa.so.5.0 00:02:24.969 SYMLINK libspdk_accel_iaa.so 00:02:24.969 SO libspdk_blob_bdev.so.11.0 00:02:24.969 SYMLINK libspdk_accel_error.so 00:02:24.969 SYMLINK libspdk_accel_dsa.so 00:02:24.969 LIB libspdk_vfu_device.a 00:02:24.969 SYMLINK libspdk_blob_bdev.so 00:02:24.969 SO libspdk_vfu_device.so.3.0 00:02:24.969 SYMLINK libspdk_vfu_device.so 00:02:25.227 LIB libspdk_fsdev_aio.a 00:02:25.227 LIB libspdk_sock_posix.a 00:02:25.227 SO libspdk_fsdev_aio.so.1.0 00:02:25.227 SO libspdk_sock_posix.so.6.0 00:02:25.227 
SYMLINK libspdk_fsdev_aio.so 00:02:25.227 SYMLINK libspdk_sock_posix.so 00:02:25.485 CC module/bdev/delay/vbdev_delay.o 00:02:25.485 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:25.485 CC module/bdev/gpt/gpt.o 00:02:25.485 CC module/bdev/gpt/vbdev_gpt.o 00:02:25.485 CC module/bdev/error/vbdev_error.o 00:02:25.485 CC module/blobfs/bdev/blobfs_bdev.o 00:02:25.485 CC module/bdev/error/vbdev_error_rpc.o 00:02:25.485 CC module/bdev/malloc/bdev_malloc.o 00:02:25.485 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:25.485 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:25.485 CC module/bdev/nvme/bdev_nvme.o 00:02:25.485 CC module/bdev/nvme/nvme_rpc.o 00:02:25.485 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:25.485 CC module/bdev/null/bdev_null.o 00:02:25.485 CC module/bdev/null/bdev_null_rpc.o 00:02:25.485 CC module/bdev/nvme/bdev_mdns_client.o 00:02:25.485 CC module/bdev/nvme/vbdev_opal.o 00:02:25.485 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:25.485 CC module/bdev/iscsi/bdev_iscsi.o 00:02:25.485 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:25.485 CC module/bdev/aio/bdev_aio.o 00:02:25.485 CC module/bdev/aio/bdev_aio_rpc.o 00:02:25.485 CC module/bdev/lvol/vbdev_lvol.o 00:02:25.485 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:25.485 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:25.485 CC module/bdev/ftl/bdev_ftl.o 00:02:25.485 CC module/bdev/raid/bdev_raid.o 00:02:25.485 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:25.485 CC module/bdev/raid/bdev_raid_sb.o 00:02:25.485 CC module/bdev/raid/bdev_raid_rpc.o 00:02:25.485 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:25.485 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:25.485 CC module/bdev/raid/raid0.o 00:02:25.485 CC module/bdev/passthru/vbdev_passthru.o 00:02:25.485 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:25.485 CC module/bdev/raid/raid1.o 00:02:25.485 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:25.485 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:25.485 CC module/bdev/raid/concat.o 00:02:25.485 CC 
module/bdev/virtio/bdev_virtio_rpc.o 00:02:25.485 CC module/bdev/split/vbdev_split.o 00:02:25.485 CC module/bdev/split/vbdev_split_rpc.o 00:02:25.743 LIB libspdk_blobfs_bdev.a 00:02:25.743 SO libspdk_blobfs_bdev.so.6.0 00:02:25.743 LIB libspdk_bdev_split.a 00:02:25.743 LIB libspdk_bdev_error.a 00:02:25.743 LIB libspdk_bdev_null.a 00:02:25.743 SYMLINK libspdk_blobfs_bdev.so 00:02:25.743 SO libspdk_bdev_error.so.6.0 00:02:25.743 SO libspdk_bdev_split.so.6.0 00:02:25.743 SO libspdk_bdev_null.so.6.0 00:02:25.743 LIB libspdk_bdev_ftl.a 00:02:25.743 LIB libspdk_bdev_gpt.a 00:02:25.743 SO libspdk_bdev_ftl.so.6.0 00:02:25.743 LIB libspdk_bdev_malloc.a 00:02:25.743 SYMLINK libspdk_bdev_split.so 00:02:25.743 SYMLINK libspdk_bdev_null.so 00:02:25.743 SYMLINK libspdk_bdev_error.so 00:02:25.743 SO libspdk_bdev_gpt.so.6.0 00:02:25.743 LIB libspdk_bdev_passthru.a 00:02:25.743 LIB libspdk_bdev_delay.a 00:02:25.743 LIB libspdk_bdev_iscsi.a 00:02:25.743 SO libspdk_bdev_malloc.so.6.0 00:02:25.743 LIB libspdk_bdev_aio.a 00:02:25.743 SO libspdk_bdev_passthru.so.6.0 00:02:25.743 SO libspdk_bdev_delay.so.6.0 00:02:25.743 LIB libspdk_bdev_zone_block.a 00:02:25.743 SYMLINK libspdk_bdev_ftl.so 00:02:25.743 SYMLINK libspdk_bdev_gpt.so 00:02:25.743 SO libspdk_bdev_iscsi.so.6.0 00:02:25.743 SO libspdk_bdev_aio.so.6.0 00:02:25.743 SO libspdk_bdev_zone_block.so.6.0 00:02:26.002 SYMLINK libspdk_bdev_malloc.so 00:02:26.002 SYMLINK libspdk_bdev_passthru.so 00:02:26.002 SYMLINK libspdk_bdev_delay.so 00:02:26.002 SYMLINK libspdk_bdev_iscsi.so 00:02:26.002 SYMLINK libspdk_bdev_aio.so 00:02:26.002 SYMLINK libspdk_bdev_zone_block.so 00:02:26.002 LIB libspdk_bdev_virtio.a 00:02:26.002 LIB libspdk_bdev_lvol.a 00:02:26.002 SO libspdk_bdev_virtio.so.6.0 00:02:26.002 SO libspdk_bdev_lvol.so.6.0 00:02:26.002 SYMLINK libspdk_bdev_virtio.so 00:02:26.002 SYMLINK libspdk_bdev_lvol.so 00:02:26.260 LIB libspdk_bdev_raid.a 00:02:26.260 SO libspdk_bdev_raid.so.6.0 00:02:26.519 SYMLINK libspdk_bdev_raid.so 
00:02:27.520 LIB libspdk_bdev_nvme.a 00:02:27.520 SO libspdk_bdev_nvme.so.7.1 00:02:27.520 SYMLINK libspdk_bdev_nvme.so 00:02:28.171 CC module/event/subsystems/vmd/vmd.o 00:02:28.171 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:28.171 CC module/event/subsystems/iobuf/iobuf.o 00:02:28.171 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:28.171 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:28.171 CC module/event/subsystems/keyring/keyring.o 00:02:28.171 CC module/event/subsystems/sock/sock.o 00:02:28.171 CC module/event/subsystems/fsdev/fsdev.o 00:02:28.171 CC module/event/subsystems/scheduler/scheduler.o 00:02:28.171 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:28.171 LIB libspdk_event_vfu_tgt.a 00:02:28.171 LIB libspdk_event_fsdev.a 00:02:28.171 LIB libspdk_event_vhost_blk.a 00:02:28.171 LIB libspdk_event_iobuf.a 00:02:28.171 LIB libspdk_event_vmd.a 00:02:28.171 LIB libspdk_event_scheduler.a 00:02:28.171 LIB libspdk_event_keyring.a 00:02:28.171 LIB libspdk_event_sock.a 00:02:28.171 SO libspdk_event_fsdev.so.1.0 00:02:28.171 SO libspdk_event_vfu_tgt.so.3.0 00:02:28.171 SO libspdk_event_scheduler.so.4.0 00:02:28.171 SO libspdk_event_vhost_blk.so.3.0 00:02:28.171 SO libspdk_event_iobuf.so.3.0 00:02:28.171 SO libspdk_event_vmd.so.6.0 00:02:28.171 SO libspdk_event_sock.so.5.0 00:02:28.436 SO libspdk_event_keyring.so.1.0 00:02:28.436 SYMLINK libspdk_event_fsdev.so 00:02:28.436 SYMLINK libspdk_event_scheduler.so 00:02:28.436 SYMLINK libspdk_event_vhost_blk.so 00:02:28.436 SYMLINK libspdk_event_vfu_tgt.so 00:02:28.436 SYMLINK libspdk_event_vmd.so 00:02:28.436 SYMLINK libspdk_event_iobuf.so 00:02:28.436 SYMLINK libspdk_event_sock.so 00:02:28.437 SYMLINK libspdk_event_keyring.so 00:02:28.696 CC module/event/subsystems/accel/accel.o 00:02:28.696 LIB libspdk_event_accel.a 00:02:28.955 SO libspdk_event_accel.so.6.0 00:02:28.955 SYMLINK libspdk_event_accel.so 00:02:29.215 CC module/event/subsystems/bdev/bdev.o 00:02:29.477 LIB libspdk_event_bdev.a 00:02:29.477 
SO libspdk_event_bdev.so.6.0 00:02:29.477 SYMLINK libspdk_event_bdev.so 00:02:29.735 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:29.735 CC module/event/subsystems/scsi/scsi.o 00:02:29.735 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:29.735 CC module/event/subsystems/nbd/nbd.o 00:02:29.735 CC module/event/subsystems/ublk/ublk.o 00:02:29.993 LIB libspdk_event_nbd.a 00:02:29.993 LIB libspdk_event_ublk.a 00:02:29.993 LIB libspdk_event_scsi.a 00:02:29.993 SO libspdk_event_ublk.so.3.0 00:02:29.993 SO libspdk_event_nbd.so.6.0 00:02:29.993 SO libspdk_event_scsi.so.6.0 00:02:29.993 LIB libspdk_event_nvmf.a 00:02:29.993 SYMLINK libspdk_event_ublk.so 00:02:29.993 SYMLINK libspdk_event_nbd.so 00:02:29.993 SO libspdk_event_nvmf.so.6.0 00:02:29.993 SYMLINK libspdk_event_scsi.so 00:02:29.993 SYMLINK libspdk_event_nvmf.so 00:02:30.253 CC module/event/subsystems/iscsi/iscsi.o 00:02:30.253 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:30.512 LIB libspdk_event_vhost_scsi.a 00:02:30.512 LIB libspdk_event_iscsi.a 00:02:30.512 SO libspdk_event_vhost_scsi.so.3.0 00:02:30.512 SO libspdk_event_iscsi.so.6.0 00:02:30.512 SYMLINK libspdk_event_vhost_scsi.so 00:02:30.512 SYMLINK libspdk_event_iscsi.so 00:02:30.771 SO libspdk.so.6.0 00:02:30.771 SYMLINK libspdk.so 00:02:31.030 CXX app/trace/trace.o 00:02:31.030 CC app/trace_record/trace_record.o 00:02:31.030 CC app/spdk_lspci/spdk_lspci.o 00:02:31.030 CC app/spdk_nvme_discover/discovery_aer.o 00:02:31.030 CC app/spdk_top/spdk_top.o 00:02:31.030 CC app/spdk_nvme_perf/perf.o 00:02:31.030 CC app/spdk_nvme_identify/identify.o 00:02:31.030 TEST_HEADER include/spdk/accel_module.h 00:02:31.030 TEST_HEADER include/spdk/accel.h 00:02:31.030 TEST_HEADER include/spdk/assert.h 00:02:31.030 CC test/rpc_client/rpc_client_test.o 00:02:31.030 TEST_HEADER include/spdk/bdev.h 00:02:31.030 TEST_HEADER include/spdk/barrier.h 00:02:31.030 TEST_HEADER include/spdk/base64.h 00:02:31.030 TEST_HEADER include/spdk/bdev_module.h 00:02:31.030 
TEST_HEADER include/spdk/bdev_zone.h 00:02:31.030 TEST_HEADER include/spdk/bit_pool.h 00:02:31.030 TEST_HEADER include/spdk/bit_array.h 00:02:31.030 TEST_HEADER include/spdk/blob_bdev.h 00:02:31.293 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:31.293 TEST_HEADER include/spdk/blobfs.h 00:02:31.293 TEST_HEADER include/spdk/blob.h 00:02:31.293 TEST_HEADER include/spdk/config.h 00:02:31.293 TEST_HEADER include/spdk/conf.h 00:02:31.293 TEST_HEADER include/spdk/cpuset.h 00:02:31.293 TEST_HEADER include/spdk/crc16.h 00:02:31.293 TEST_HEADER include/spdk/crc64.h 00:02:31.293 TEST_HEADER include/spdk/crc32.h 00:02:31.293 TEST_HEADER include/spdk/dif.h 00:02:31.293 TEST_HEADER include/spdk/dma.h 00:02:31.293 TEST_HEADER include/spdk/env_dpdk.h 00:02:31.293 TEST_HEADER include/spdk/endian.h 00:02:31.293 TEST_HEADER include/spdk/env.h 00:02:31.293 TEST_HEADER include/spdk/fd_group.h 00:02:31.293 TEST_HEADER include/spdk/event.h 00:02:31.293 TEST_HEADER include/spdk/fd.h 00:02:31.293 TEST_HEADER include/spdk/file.h 00:02:31.293 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:31.293 TEST_HEADER include/spdk/fsdev_module.h 00:02:31.293 TEST_HEADER include/spdk/fsdev.h 00:02:31.293 TEST_HEADER include/spdk/ftl.h 00:02:31.293 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:31.293 TEST_HEADER include/spdk/gpt_spec.h 00:02:31.293 TEST_HEADER include/spdk/hexlify.h 00:02:31.293 TEST_HEADER include/spdk/idxd.h 00:02:31.293 TEST_HEADER include/spdk/histogram_data.h 00:02:31.293 TEST_HEADER include/spdk/ioat.h 00:02:31.293 TEST_HEADER include/spdk/idxd_spec.h 00:02:31.293 TEST_HEADER include/spdk/init.h 00:02:31.293 TEST_HEADER include/spdk/ioat_spec.h 00:02:31.293 TEST_HEADER include/spdk/iscsi_spec.h 00:02:31.293 CC app/nvmf_tgt/nvmf_main.o 00:02:31.293 TEST_HEADER include/spdk/keyring.h 00:02:31.293 TEST_HEADER include/spdk/json.h 00:02:31.293 CC app/spdk_dd/spdk_dd.o 00:02:31.293 TEST_HEADER include/spdk/keyring_module.h 00:02:31.293 TEST_HEADER include/spdk/jsonrpc.h 00:02:31.293 
TEST_HEADER include/spdk/likely.h 00:02:31.293 TEST_HEADER include/spdk/lvol.h 00:02:31.293 TEST_HEADER include/spdk/log.h 00:02:31.293 TEST_HEADER include/spdk/md5.h 00:02:31.293 TEST_HEADER include/spdk/memory.h 00:02:31.293 TEST_HEADER include/spdk/mmio.h 00:02:31.293 TEST_HEADER include/spdk/nbd.h 00:02:31.293 CC app/iscsi_tgt/iscsi_tgt.o 00:02:31.293 TEST_HEADER include/spdk/notify.h 00:02:31.293 TEST_HEADER include/spdk/net.h 00:02:31.293 TEST_HEADER include/spdk/nvme.h 00:02:31.293 TEST_HEADER include/spdk/nvme_intel.h 00:02:31.293 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:31.293 TEST_HEADER include/spdk/nvme_zns.h 00:02:31.293 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:31.293 TEST_HEADER include/spdk/nvme_spec.h 00:02:31.293 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:31.293 TEST_HEADER include/spdk/nvmf.h 00:02:31.293 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:31.293 TEST_HEADER include/spdk/opal.h 00:02:31.293 TEST_HEADER include/spdk/nvmf_transport.h 00:02:31.293 TEST_HEADER include/spdk/nvmf_spec.h 00:02:31.293 TEST_HEADER include/spdk/opal_spec.h 00:02:31.293 TEST_HEADER include/spdk/queue.h 00:02:31.293 TEST_HEADER include/spdk/pci_ids.h 00:02:31.293 TEST_HEADER include/spdk/pipe.h 00:02:31.293 TEST_HEADER include/spdk/reduce.h 00:02:31.293 TEST_HEADER include/spdk/rpc.h 00:02:31.293 TEST_HEADER include/spdk/scheduler.h 00:02:31.293 TEST_HEADER include/spdk/scsi.h 00:02:31.293 TEST_HEADER include/spdk/sock.h 00:02:31.293 TEST_HEADER include/spdk/scsi_spec.h 00:02:31.293 TEST_HEADER include/spdk/stdinc.h 00:02:31.293 TEST_HEADER include/spdk/string.h 00:02:31.293 TEST_HEADER include/spdk/thread.h 00:02:31.293 TEST_HEADER include/spdk/trace_parser.h 00:02:31.293 TEST_HEADER include/spdk/trace.h 00:02:31.293 TEST_HEADER include/spdk/ublk.h 00:02:31.293 TEST_HEADER include/spdk/tree.h 00:02:31.293 TEST_HEADER include/spdk/util.h 00:02:31.293 TEST_HEADER include/spdk/uuid.h 00:02:31.293 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:31.293 
TEST_HEADER include/spdk/version.h 00:02:31.293 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:31.293 CC app/spdk_tgt/spdk_tgt.o 00:02:31.293 TEST_HEADER include/spdk/vmd.h 00:02:31.293 TEST_HEADER include/spdk/xor.h 00:02:31.293 TEST_HEADER include/spdk/vhost.h 00:02:31.293 TEST_HEADER include/spdk/zipf.h 00:02:31.293 CXX test/cpp_headers/accel.o 00:02:31.293 CXX test/cpp_headers/accel_module.o 00:02:31.293 CXX test/cpp_headers/assert.o 00:02:31.293 CXX test/cpp_headers/barrier.o 00:02:31.293 CXX test/cpp_headers/base64.o 00:02:31.293 CXX test/cpp_headers/bdev_zone.o 00:02:31.293 CXX test/cpp_headers/bdev.o 00:02:31.293 CXX test/cpp_headers/bdev_module.o 00:02:31.294 CXX test/cpp_headers/bit_array.o 00:02:31.294 CXX test/cpp_headers/bit_pool.o 00:02:31.294 CXX test/cpp_headers/blob_bdev.o 00:02:31.294 CXX test/cpp_headers/blobfs.o 00:02:31.294 CXX test/cpp_headers/blobfs_bdev.o 00:02:31.294 CXX test/cpp_headers/blob.o 00:02:31.294 CXX test/cpp_headers/config.o 00:02:31.294 CXX test/cpp_headers/conf.o 00:02:31.294 CXX test/cpp_headers/crc16.o 00:02:31.294 CXX test/cpp_headers/crc32.o 00:02:31.294 CXX test/cpp_headers/cpuset.o 00:02:31.294 CXX test/cpp_headers/crc64.o 00:02:31.294 CXX test/cpp_headers/dif.o 00:02:31.294 CXX test/cpp_headers/endian.o 00:02:31.294 CXX test/cpp_headers/dma.o 00:02:31.294 CXX test/cpp_headers/env.o 00:02:31.294 CXX test/cpp_headers/env_dpdk.o 00:02:31.294 CXX test/cpp_headers/fd_group.o 00:02:31.294 CXX test/cpp_headers/event.o 00:02:31.294 CXX test/cpp_headers/fd.o 00:02:31.294 CXX test/cpp_headers/file.o 00:02:31.294 CXX test/cpp_headers/fsdev_module.o 00:02:31.294 CXX test/cpp_headers/fsdev.o 00:02:31.294 CXX test/cpp_headers/fuse_dispatcher.o 00:02:31.294 CXX test/cpp_headers/ftl.o 00:02:31.294 CXX test/cpp_headers/gpt_spec.o 00:02:31.294 CXX test/cpp_headers/hexlify.o 00:02:31.294 CXX test/cpp_headers/idxd_spec.o 00:02:31.294 CXX test/cpp_headers/histogram_data.o 00:02:31.294 CXX test/cpp_headers/idxd.o 00:02:31.294 CXX 
test/cpp_headers/ioat.o 00:02:31.294 CXX test/cpp_headers/init.o 00:02:31.294 CXX test/cpp_headers/ioat_spec.o 00:02:31.294 CXX test/cpp_headers/json.o 00:02:31.294 CXX test/cpp_headers/iscsi_spec.o 00:02:31.294 CXX test/cpp_headers/jsonrpc.o 00:02:31.294 CXX test/cpp_headers/keyring.o 00:02:31.294 CXX test/cpp_headers/keyring_module.o 00:02:31.294 CXX test/cpp_headers/log.o 00:02:31.294 CXX test/cpp_headers/likely.o 00:02:31.294 CXX test/cpp_headers/md5.o 00:02:31.294 CXX test/cpp_headers/lvol.o 00:02:31.294 CXX test/cpp_headers/memory.o 00:02:31.294 CC test/thread/poller_perf/poller_perf.o 00:02:31.294 CXX test/cpp_headers/net.o 00:02:31.294 CXX test/cpp_headers/nbd.o 00:02:31.294 CXX test/cpp_headers/mmio.o 00:02:31.294 CXX test/cpp_headers/notify.o 00:02:31.294 CXX test/cpp_headers/nvme.o 00:02:31.294 CXX test/cpp_headers/nvme_intel.o 00:02:31.294 CXX test/cpp_headers/nvme_ocssd.o 00:02:31.294 CXX test/cpp_headers/nvme_spec.o 00:02:31.294 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:31.294 CXX test/cpp_headers/nvme_zns.o 00:02:31.294 CXX test/cpp_headers/nvmf_cmd.o 00:02:31.294 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:31.294 CXX test/cpp_headers/nvmf.o 00:02:31.294 CXX test/cpp_headers/nvmf_spec.o 00:02:31.294 CXX test/cpp_headers/nvmf_transport.o 00:02:31.294 CXX test/cpp_headers/opal.o 00:02:31.294 CC examples/util/zipf/zipf.o 00:02:31.294 CC test/app/jsoncat/jsoncat.o 00:02:31.294 CC test/env/vtophys/vtophys.o 00:02:31.294 CC examples/ioat/perf/perf.o 00:02:31.294 CC test/app/stub/stub.o 00:02:31.294 CC test/env/pci/pci_ut.o 00:02:31.294 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:31.294 CC test/app/histogram_perf/histogram_perf.o 00:02:31.294 CC test/env/memory/memory_ut.o 00:02:31.294 CC test/dma/test_dma/test_dma.o 00:02:31.294 CC examples/ioat/verify/verify.o 00:02:31.294 CXX test/cpp_headers/opal_spec.o 00:02:31.294 CC app/fio/nvme/fio_plugin.o 00:02:31.559 CC app/fio/bdev/fio_plugin.o 00:02:31.559 CC test/app/bdev_svc/bdev_svc.o 
00:02:31.559 LINK spdk_nvme_discover 00:02:31.559 LINK spdk_lspci 00:02:31.559 LINK nvmf_tgt 00:02:31.825 CC test/env/mem_callbacks/mem_callbacks.o 00:02:31.825 LINK rpc_client_test 00:02:31.825 LINK iscsi_tgt 00:02:31.825 LINK vtophys 00:02:31.825 LINK spdk_tgt 00:02:31.825 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:31.825 LINK interrupt_tgt 00:02:31.825 LINK histogram_perf 00:02:31.825 LINK zipf 00:02:31.825 CXX test/cpp_headers/pci_ids.o 00:02:31.825 CXX test/cpp_headers/pipe.o 00:02:31.825 CXX test/cpp_headers/queue.o 00:02:31.825 CXX test/cpp_headers/reduce.o 00:02:31.826 LINK stub 00:02:31.826 CXX test/cpp_headers/rpc.o 00:02:31.826 CXX test/cpp_headers/scheduler.o 00:02:31.826 CXX test/cpp_headers/scsi.o 00:02:31.826 LINK poller_perf 00:02:31.826 CXX test/cpp_headers/scsi_spec.o 00:02:31.826 CXX test/cpp_headers/sock.o 00:02:31.826 CXX test/cpp_headers/stdinc.o 00:02:31.826 CXX test/cpp_headers/string.o 00:02:31.826 CXX test/cpp_headers/thread.o 00:02:31.826 CXX test/cpp_headers/trace.o 00:02:31.826 CXX test/cpp_headers/trace_parser.o 00:02:31.826 LINK spdk_trace_record 00:02:31.826 CXX test/cpp_headers/ublk.o 00:02:31.826 CXX test/cpp_headers/tree.o 00:02:31.826 CXX test/cpp_headers/util.o 00:02:31.826 LINK jsoncat 00:02:31.826 CXX test/cpp_headers/uuid.o 00:02:31.826 CXX test/cpp_headers/version.o 00:02:31.826 CXX test/cpp_headers/vfio_user_pci.o 00:02:31.826 CXX test/cpp_headers/vfio_user_spec.o 00:02:31.826 CXX test/cpp_headers/vhost.o 00:02:32.087 CXX test/cpp_headers/vmd.o 00:02:32.087 CXX test/cpp_headers/xor.o 00:02:32.087 LINK ioat_perf 00:02:32.087 CXX test/cpp_headers/zipf.o 00:02:32.087 LINK spdk_dd 00:02:32.087 LINK env_dpdk_post_init 00:02:32.087 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:32.087 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:32.087 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:32.087 LINK verify 00:02:32.087 LINK bdev_svc 00:02:32.087 LINK pci_ut 00:02:32.087 LINK spdk_trace 00:02:32.346 LINK spdk_bdev 00:02:32.346 
LINK test_dma 00:02:32.346 LINK spdk_nvme 00:02:32.346 CC examples/idxd/perf/perf.o 00:02:32.346 CC examples/vmd/lsvmd/lsvmd.o 00:02:32.346 CC examples/vmd/led/led.o 00:02:32.346 CC examples/sock/hello_world/hello_sock.o 00:02:32.346 CC test/event/reactor/reactor.o 00:02:32.346 LINK mem_callbacks 00:02:32.346 CC examples/thread/thread/thread_ex.o 00:02:32.346 CC test/event/app_repeat/app_repeat.o 00:02:32.346 CC test/event/reactor_perf/reactor_perf.o 00:02:32.346 CC test/event/event_perf/event_perf.o 00:02:32.346 CC test/event/scheduler/scheduler.o 00:02:32.346 LINK nvme_fuzz 00:02:32.346 LINK spdk_top 00:02:32.605 LINK spdk_nvme_perf 00:02:32.605 LINK vhost_fuzz 00:02:32.605 LINK lsvmd 00:02:32.605 LINK spdk_nvme_identify 00:02:32.605 LINK led 00:02:32.605 LINK reactor_perf 00:02:32.605 LINK reactor 00:02:32.605 CC app/vhost/vhost.o 00:02:32.605 LINK app_repeat 00:02:32.605 LINK event_perf 00:02:32.605 LINK hello_sock 00:02:32.605 LINK idxd_perf 00:02:32.605 LINK scheduler 00:02:32.605 LINK thread 00:02:32.863 LINK vhost 00:02:32.863 CC test/nvme/e2edp/nvme_dp.o 00:02:32.863 CC test/nvme/reset/reset.o 00:02:32.863 CC test/nvme/overhead/overhead.o 00:02:32.863 CC test/nvme/connect_stress/connect_stress.o 00:02:32.863 CC test/nvme/reserve/reserve.o 00:02:32.863 CC test/nvme/aer/aer.o 00:02:32.863 CC test/nvme/sgl/sgl.o 00:02:32.863 CC test/nvme/cuse/cuse.o 00:02:32.863 CC test/nvme/fdp/fdp.o 00:02:32.863 CC test/nvme/err_injection/err_injection.o 00:02:32.863 CC test/nvme/boot_partition/boot_partition.o 00:02:32.863 CC test/nvme/startup/startup.o 00:02:32.863 CC test/nvme/simple_copy/simple_copy.o 00:02:32.863 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:32.863 CC test/nvme/fused_ordering/fused_ordering.o 00:02:32.863 CC test/nvme/compliance/nvme_compliance.o 00:02:32.863 LINK memory_ut 00:02:32.863 CC test/blobfs/mkfs/mkfs.o 00:02:32.863 CC test/accel/dif/dif.o 00:02:32.863 CC test/lvol/esnap/esnap.o 00:02:33.121 LINK startup 00:02:33.121 LINK boot_partition 
00:02:33.121 LINK err_injection 00:02:33.121 CC examples/nvme/abort/abort.o 00:02:33.121 LINK fused_ordering 00:02:33.121 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:33.121 CC examples/nvme/arbitration/arbitration.o 00:02:33.121 LINK connect_stress 00:02:33.121 CC examples/nvme/hotplug/hotplug.o 00:02:33.121 CC examples/nvme/reconnect/reconnect.o 00:02:33.121 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:33.121 CC examples/nvme/hello_world/hello_world.o 00:02:33.121 LINK doorbell_aers 00:02:33.121 LINK reserve 00:02:33.121 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:33.121 LINK nvme_dp 00:02:33.121 LINK simple_copy 00:02:33.121 LINK mkfs 00:02:33.121 LINK overhead 00:02:33.121 LINK reset 00:02:33.121 LINK sgl 00:02:33.121 LINK aer 00:02:33.121 CC examples/accel/perf/accel_perf.o 00:02:33.121 LINK nvme_compliance 00:02:33.121 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:33.121 LINK fdp 00:02:33.121 CC examples/blob/cli/blobcli.o 00:02:33.121 CC examples/blob/hello_world/hello_blob.o 00:02:33.380 LINK pmr_persistence 00:02:33.380 LINK cmb_copy 00:02:33.380 LINK hello_world 00:02:33.380 LINK hotplug 00:02:33.380 LINK reconnect 00:02:33.380 LINK arbitration 00:02:33.380 LINK abort 00:02:33.380 LINK hello_blob 00:02:33.380 LINK hello_fsdev 00:02:33.380 LINK dif 00:02:33.380 LINK nvme_manage 00:02:33.380 LINK iscsi_fuzz 00:02:33.639 LINK accel_perf 00:02:33.639 LINK blobcli 00:02:33.898 LINK cuse 00:02:33.898 CC test/bdev/bdevio/bdevio.o 00:02:34.157 CC examples/bdev/hello_world/hello_bdev.o 00:02:34.157 CC examples/bdev/bdevperf/bdevperf.o 00:02:34.157 LINK hello_bdev 00:02:34.415 LINK bdevio 00:02:34.674 LINK bdevperf 00:02:35.241 CC examples/nvmf/nvmf/nvmf.o 00:02:35.499 LINK nvmf 00:02:36.436 LINK esnap 00:02:36.695 00:02:36.696 real 0m55.224s 00:02:36.696 user 8m1.143s 00:02:36.696 sys 3m41.640s 00:02:36.696 15:11:40 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:36.696 15:11:40 make -- common/autotest_common.sh@10 -- $ set +x 
00:02:36.696 ************************************ 00:02:36.696 END TEST make 00:02:36.696 ************************************ 00:02:36.955 15:11:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:36.955 15:11:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:36.955 15:11:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:36.955 15:11:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.955 15:11:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:36.955 15:11:40 -- pm/common@44 -- $ pid=1886449 00:02:36.955 15:11:40 -- pm/common@50 -- $ kill -TERM 1886449 00:02:36.955 15:11:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.955 15:11:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:36.955 15:11:40 -- pm/common@44 -- $ pid=1886450 00:02:36.955 15:11:40 -- pm/common@50 -- $ kill -TERM 1886450 00:02:36.955 15:11:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.955 15:11:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:36.955 15:11:40 -- pm/common@44 -- $ pid=1886452 00:02:36.955 15:11:40 -- pm/common@50 -- $ kill -TERM 1886452 00:02:36.955 15:11:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.955 15:11:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:36.955 15:11:40 -- pm/common@44 -- $ pid=1886477 00:02:36.955 15:11:40 -- pm/common@50 -- $ sudo -E kill -TERM 1886477 00:02:36.955 15:11:40 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:36.955 15:11:40 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:36.955 15:11:40 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:36.955 15:11:40 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:36.955 15:11:40 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:36.955 15:11:40 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:36.955 15:11:40 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:36.955 15:11:40 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:36.955 15:11:40 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:36.955 15:11:40 -- scripts/common.sh@336 -- # IFS=.-: 00:02:36.955 15:11:40 -- scripts/common.sh@336 -- # read -ra ver1 00:02:36.955 15:11:40 -- scripts/common.sh@337 -- # IFS=.-: 00:02:36.955 15:11:40 -- scripts/common.sh@337 -- # read -ra ver2 00:02:36.955 15:11:40 -- scripts/common.sh@338 -- # local 'op=<' 00:02:36.955 15:11:40 -- scripts/common.sh@340 -- # ver1_l=2 00:02:36.955 15:11:40 -- scripts/common.sh@341 -- # ver2_l=1 00:02:36.955 15:11:40 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:36.955 15:11:40 -- scripts/common.sh@344 -- # case "$op" in 00:02:36.955 15:11:40 -- scripts/common.sh@345 -- # : 1 00:02:36.955 15:11:40 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:36.955 15:11:40 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:36.955 15:11:40 -- scripts/common.sh@365 -- # decimal 1 00:02:36.955 15:11:40 -- scripts/common.sh@353 -- # local d=1 00:02:36.955 15:11:40 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:36.955 15:11:40 -- scripts/common.sh@355 -- # echo 1 00:02:36.955 15:11:40 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:36.955 15:11:40 -- scripts/common.sh@366 -- # decimal 2 00:02:36.955 15:11:40 -- scripts/common.sh@353 -- # local d=2 00:02:36.955 15:11:40 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:36.955 15:11:40 -- scripts/common.sh@355 -- # echo 2 00:02:36.955 15:11:40 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:36.955 15:11:40 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:36.955 15:11:40 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:36.955 15:11:40 -- scripts/common.sh@368 -- # return 0 00:02:36.955 15:11:40 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:36.955 15:11:40 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:36.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:36.955 --rc genhtml_branch_coverage=1 00:02:36.955 --rc genhtml_function_coverage=1 00:02:36.955 --rc genhtml_legend=1 00:02:36.955 --rc geninfo_all_blocks=1 00:02:36.955 --rc geninfo_unexecuted_blocks=1 00:02:36.955 00:02:36.955 ' 00:02:36.955 15:11:40 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:36.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:36.955 --rc genhtml_branch_coverage=1 00:02:36.955 --rc genhtml_function_coverage=1 00:02:36.956 --rc genhtml_legend=1 00:02:36.956 --rc geninfo_all_blocks=1 00:02:36.956 --rc geninfo_unexecuted_blocks=1 00:02:36.956 00:02:36.956 ' 00:02:36.956 15:11:40 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:36.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:36.956 --rc genhtml_branch_coverage=1 00:02:36.956 --rc 
genhtml_function_coverage=1 00:02:36.956 --rc genhtml_legend=1 00:02:36.956 --rc geninfo_all_blocks=1 00:02:36.956 --rc geninfo_unexecuted_blocks=1 00:02:36.956 00:02:36.956 ' 00:02:36.956 15:11:40 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:36.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:36.956 --rc genhtml_branch_coverage=1 00:02:36.956 --rc genhtml_function_coverage=1 00:02:36.956 --rc genhtml_legend=1 00:02:36.956 --rc geninfo_all_blocks=1 00:02:36.956 --rc geninfo_unexecuted_blocks=1 00:02:36.956 00:02:36.956 ' 00:02:36.956 15:11:40 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:36.956 15:11:40 -- nvmf/common.sh@7 -- # uname -s 00:02:36.956 15:11:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:36.956 15:11:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:36.956 15:11:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:36.956 15:11:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:36.956 15:11:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:36.956 15:11:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:36.956 15:11:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:36.956 15:11:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:36.956 15:11:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:36.956 15:11:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:36.956 15:11:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:36.956 15:11:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:36.956 15:11:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:36.956 15:11:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:36.956 15:11:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:36.956 15:11:40 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:36.956 15:11:40 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:36.956 15:11:40 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:36.956 15:11:40 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:36.956 15:11:40 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:36.956 15:11:40 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:36.956 15:11:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.956 15:11:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.956 15:11:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.956 15:11:40 -- paths/export.sh@5 -- # export PATH 00:02:36.956 15:11:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.956 15:11:40 -- nvmf/common.sh@51 -- # : 0 00:02:36.956 15:11:40 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:36.956 15:11:40 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:36.956 15:11:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:36.956 15:11:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:36.956 15:11:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:36.956 15:11:40 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:36.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:36.956 15:11:40 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:36.956 15:11:40 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:36.956 15:11:40 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:36.956 15:11:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:36.956 15:11:40 -- spdk/autotest.sh@32 -- # uname -s 00:02:36.956 15:11:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:36.956 15:11:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:36.956 15:11:40 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:36.956 15:11:40 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:36.956 15:11:40 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:36.956 15:11:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:36.956 15:11:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:37.216 15:11:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:37.216 15:11:40 -- spdk/autotest.sh@48 -- # udevadm_pid=1948693 00:02:37.216 15:11:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:37.216 15:11:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:37.216 15:11:40 -- pm/common@17 -- # local monitor 00:02:37.216 15:11:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.216 15:11:40 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:37.216 15:11:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.216 15:11:40 -- pm/common@21 -- # date +%s 00:02:37.216 15:11:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.216 15:11:40 -- pm/common@21 -- # date +%s 00:02:37.216 15:11:40 -- pm/common@25 -- # sleep 1 00:02:37.216 15:11:40 -- pm/common@21 -- # date +%s 00:02:37.216 15:11:40 -- pm/common@21 -- # date +%s 00:02:37.216 15:11:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732111900 00:02:37.216 15:11:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732111900 00:02:37.216 15:11:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732111900 00:02:37.216 15:11:40 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732111900 00:02:37.216 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732111900_collect-cpu-load.pm.log 00:02:37.216 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732111900_collect-vmstat.pm.log 00:02:37.216 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732111900_collect-cpu-temp.pm.log 00:02:37.216 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732111900_collect-bmc-pm.bmc.pm.log 00:02:38.154 
15:11:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:38.154 15:11:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:38.154 15:11:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:38.154 15:11:41 -- common/autotest_common.sh@10 -- # set +x 00:02:38.154 15:11:41 -- spdk/autotest.sh@59 -- # create_test_list 00:02:38.154 15:11:41 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:38.154 15:11:41 -- common/autotest_common.sh@10 -- # set +x 00:02:38.154 15:11:41 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:38.154 15:11:41 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:38.154 15:11:41 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:38.154 15:11:41 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:38.154 15:11:41 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:38.154 15:11:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:38.154 15:11:41 -- common/autotest_common.sh@1457 -- # uname 00:02:38.154 15:11:41 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:38.154 15:11:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:38.154 15:11:41 -- common/autotest_common.sh@1477 -- # uname 00:02:38.154 15:11:41 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:38.154 15:11:41 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:38.154 15:11:41 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:38.154 lcov: LCOV version 1.15 00:02:38.154 15:11:42 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:53.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:53.039 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:05.247 15:12:07 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:05.247 15:12:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:05.247 15:12:07 -- common/autotest_common.sh@10 -- # set +x 00:03:05.247 15:12:07 -- spdk/autotest.sh@78 -- # rm -f 00:03:05.247 15:12:07 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.187 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:06.187 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:06.187 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:06.187 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:06.187 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:06.187 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:06.187 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:06.187 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:06.187 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:06.187 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:06.187 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:06.445 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:06.445 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:06.445 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:06.445 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:06.445 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:06.445 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:06.445 15:12:10 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:06.445 15:12:10 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:06.445 15:12:10 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:06.445 15:12:10 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:06.445 15:12:10 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:06.445 15:12:10 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:06.445 15:12:10 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:06.445 15:12:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:06.446 15:12:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:06.446 15:12:10 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:06.446 15:12:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:06.446 15:12:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:06.446 15:12:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:06.446 15:12:10 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:06.446 15:12:10 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:06.446 No valid GPT data, bailing 00:03:06.446 15:12:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:06.446 15:12:10 -- scripts/common.sh@394 -- # pt= 00:03:06.446 15:12:10 -- scripts/common.sh@395 -- # return 1 00:03:06.446 15:12:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:06.446 1+0 records in 00:03:06.446 1+0 records out 00:03:06.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00440326 s, 238 MB/s 00:03:06.446 15:12:10 -- spdk/autotest.sh@105 -- # sync 00:03:06.446 15:12:10 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:06.446 15:12:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:06.446 15:12:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:13.014 15:12:15 -- spdk/autotest.sh@111 -- # uname -s 00:03:13.014 15:12:15 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:13.014 15:12:15 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:13.014 15:12:15 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:14.921 Hugepages 00:03:14.921 node hugesize free / total 00:03:14.921 node0 1048576kB 0 / 0 00:03:14.921 node0 2048kB 0 / 0 00:03:14.921 node1 1048576kB 0 / 0 00:03:14.921 node1 2048kB 0 / 0 00:03:14.921 00:03:14.921 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:14.921 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:14.921 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:14.921 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:14.921 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:14.921 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:14.921 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:14.921 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:14.921 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:14.921 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:14.921 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:14.921 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:14.921 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:14.921 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:14.921 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:14.921 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:14.921 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:14.921 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:14.921 15:12:18 -- spdk/autotest.sh@117 -- # uname -s 00:03:14.921 15:12:18 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:14.921 15:12:18 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:03:14.921 15:12:18 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:18.209 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:18.209 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:18.209 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:18.209 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:18.209 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:18.209 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:18.209 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:18.209 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:18.209 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:18.209 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:18.209 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:18.209 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:18.209 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:18.209 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:18.209 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:18.209 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:18.775 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:18.775 15:12:22 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:20.157 15:12:23 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:20.157 15:12:23 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:20.157 15:12:23 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:20.157 15:12:23 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:20.157 15:12:23 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:20.157 15:12:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:20.157 15:12:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:20.157 15:12:23 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:20.157 15:12:23 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:03:20.157 15:12:23 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:20.157 15:12:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:20.157 15:12:23 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:22.690 Waiting for block devices as requested 00:03:22.690 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:22.949 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:22.949 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:22.949 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:23.207 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:23.207 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:23.207 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:23.465 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:23.465 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:23.465 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:23.465 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:23.724 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:23.724 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:23.724 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:23.983 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:23.983 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:23.983 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:24.242 15:12:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:24.242 15:12:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:24.242 15:12:27 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:24.242 15:12:27 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:24.242 15:12:27 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:24.242 15:12:27 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:24.242 15:12:27 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:24.242 15:12:27 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:24.242 15:12:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:24.242 15:12:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:24.242 15:12:27 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:24.242 15:12:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:24.242 15:12:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:24.242 15:12:27 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:24.242 15:12:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:24.242 15:12:27 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:24.242 15:12:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:24.242 15:12:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:24.242 15:12:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:24.242 15:12:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:24.242 15:12:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:24.242 15:12:27 -- common/autotest_common.sh@1543 -- # continue 00:03:24.242 15:12:27 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:24.242 15:12:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:24.242 15:12:27 -- common/autotest_common.sh@10 -- # set +x 00:03:24.242 15:12:27 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:24.242 15:12:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:24.242 15:12:27 -- common/autotest_common.sh@10 -- # set +x 00:03:24.242 15:12:27 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:27.533 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:27.533 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:03:27.533 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:27.533 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:27.533 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:27.533 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:27.533 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:27.533 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:27.533 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:27.533 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:27.533 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:27.533 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:27.533 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:27.533 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:27.533 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:27.533 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:28.102 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:28.102 15:12:31 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:28.102 15:12:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:28.102 15:12:31 -- common/autotest_common.sh@10 -- # set +x 00:03:28.102 15:12:31 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:28.102 15:12:31 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:28.102 15:12:31 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:28.102 15:12:31 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:28.102 15:12:31 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:28.102 15:12:31 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:28.102 15:12:31 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:28.102 15:12:31 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:28.102 15:12:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:28.102 15:12:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:28.102 15:12:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:28.102 15:12:31 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:28.102 15:12:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:28.102 15:12:32 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:28.102 15:12:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:28.102 15:12:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:28.102 15:12:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:28.361 15:12:32 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:28.361 15:12:32 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:28.361 15:12:32 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:28.361 15:12:32 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:28.361 15:12:32 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:28.361 15:12:32 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:28.361 15:12:32 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1963654 00:03:28.361 15:12:32 -- common/autotest_common.sh@1585 -- # waitforlisten 1963654 00:03:28.361 15:12:32 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:28.361 15:12:32 -- common/autotest_common.sh@835 -- # '[' -z 1963654 ']' 00:03:28.361 15:12:32 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:28.361 15:12:32 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:28.361 15:12:32 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:28.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:28.361 15:12:32 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:28.361 15:12:32 -- common/autotest_common.sh@10 -- # set +x 00:03:28.361 [2024-11-20 15:12:32.064772] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:03:28.361 [2024-11-20 15:12:32.064824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1963654 ] 00:03:28.361 [2024-11-20 15:12:32.139704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:28.361 [2024-11-20 15:12:32.182989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:28.620 15:12:32 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:28.620 15:12:32 -- common/autotest_common.sh@868 -- # return 0 00:03:28.620 15:12:32 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:28.620 15:12:32 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:28.620 15:12:32 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:31.993 nvme0n1 00:03:31.994 15:12:35 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:31.994 [2024-11-20 15:12:35.594029] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:31.994 request: 00:03:31.994 { 00:03:31.994 "nvme_ctrlr_name": "nvme0", 00:03:31.994 "password": "test", 00:03:31.994 "method": "bdev_nvme_opal_revert", 00:03:31.994 "req_id": 1 00:03:31.994 } 00:03:31.994 Got JSON-RPC error response 00:03:31.994 response: 00:03:31.994 { 00:03:31.994 "code": -32602, 00:03:31.994 "message": "Invalid parameters" 00:03:31.994 } 00:03:31.994 15:12:35 -- common/autotest_common.sh@1591 -- # true 
00:03:31.994 15:12:35 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:31.994 15:12:35 -- common/autotest_common.sh@1595 -- # killprocess 1963654 00:03:31.994 15:12:35 -- common/autotest_common.sh@954 -- # '[' -z 1963654 ']' 00:03:31.994 15:12:35 -- common/autotest_common.sh@958 -- # kill -0 1963654 00:03:31.994 15:12:35 -- common/autotest_common.sh@959 -- # uname 00:03:31.994 15:12:35 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:31.994 15:12:35 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1963654 00:03:31.994 15:12:35 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:31.994 15:12:35 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:31.994 15:12:35 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1963654' 00:03:31.994 killing process with pid 1963654 00:03:31.994 15:12:35 -- common/autotest_common.sh@973 -- # kill 1963654 00:03:31.994 15:12:35 -- common/autotest_common.sh@978 -- # wait 1963654 00:03:33.370 15:12:37 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:33.370 15:12:37 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:33.370 15:12:37 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:33.370 15:12:37 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:33.370 15:12:37 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:33.370 15:12:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:33.370 15:12:37 -- common/autotest_common.sh@10 -- # set +x 00:03:33.370 15:12:37 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:33.370 15:12:37 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:33.370 15:12:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.370 15:12:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.370 15:12:37 -- common/autotest_common.sh@10 -- # set +x 00:03:33.630 ************************************ 00:03:33.630 START TEST env 00:03:33.630 
************************************ 00:03:33.630 15:12:37 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:33.630 * Looking for test storage... 00:03:33.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:33.630 15:12:37 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:33.630 15:12:37 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:33.630 15:12:37 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:33.630 15:12:37 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:33.630 15:12:37 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:33.630 15:12:37 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:33.630 15:12:37 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:33.630 15:12:37 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.630 15:12:37 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:33.630 15:12:37 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:33.630 15:12:37 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:33.630 15:12:37 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:33.630 15:12:37 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:33.630 15:12:37 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:33.630 15:12:37 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:33.630 15:12:37 env -- scripts/common.sh@344 -- # case "$op" in 00:03:33.630 15:12:37 env -- scripts/common.sh@345 -- # : 1 00:03:33.630 15:12:37 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:33.630 15:12:37 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:33.630 15:12:37 env -- scripts/common.sh@365 -- # decimal 1 00:03:33.630 15:12:37 env -- scripts/common.sh@353 -- # local d=1 00:03:33.630 15:12:37 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.630 15:12:37 env -- scripts/common.sh@355 -- # echo 1 00:03:33.630 15:12:37 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:33.630 15:12:37 env -- scripts/common.sh@366 -- # decimal 2 00:03:33.630 15:12:37 env -- scripts/common.sh@353 -- # local d=2 00:03:33.630 15:12:37 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.630 15:12:37 env -- scripts/common.sh@355 -- # echo 2 00:03:33.630 15:12:37 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:33.630 15:12:37 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:33.630 15:12:37 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:33.630 15:12:37 env -- scripts/common.sh@368 -- # return 0 00:03:33.630 15:12:37 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.630 15:12:37 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:33.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.630 --rc genhtml_branch_coverage=1 00:03:33.630 --rc genhtml_function_coverage=1 00:03:33.630 --rc genhtml_legend=1 00:03:33.630 --rc geninfo_all_blocks=1 00:03:33.630 --rc geninfo_unexecuted_blocks=1 00:03:33.630 00:03:33.630 ' 00:03:33.630 15:12:37 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:33.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.630 --rc genhtml_branch_coverage=1 00:03:33.630 --rc genhtml_function_coverage=1 00:03:33.630 --rc genhtml_legend=1 00:03:33.630 --rc geninfo_all_blocks=1 00:03:33.630 --rc geninfo_unexecuted_blocks=1 00:03:33.630 00:03:33.630 ' 00:03:33.630 15:12:37 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:33.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:33.630 --rc genhtml_branch_coverage=1 00:03:33.630 --rc genhtml_function_coverage=1 00:03:33.630 --rc genhtml_legend=1 00:03:33.630 --rc geninfo_all_blocks=1 00:03:33.630 --rc geninfo_unexecuted_blocks=1 00:03:33.630 00:03:33.630 ' 00:03:33.630 15:12:37 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:33.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.630 --rc genhtml_branch_coverage=1 00:03:33.630 --rc genhtml_function_coverage=1 00:03:33.630 --rc genhtml_legend=1 00:03:33.630 --rc geninfo_all_blocks=1 00:03:33.630 --rc geninfo_unexecuted_blocks=1 00:03:33.630 00:03:33.630 ' 00:03:33.630 15:12:37 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:33.630 15:12:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.630 15:12:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.630 15:12:37 env -- common/autotest_common.sh@10 -- # set +x 00:03:33.630 ************************************ 00:03:33.630 START TEST env_memory 00:03:33.630 ************************************ 00:03:33.630 15:12:37 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:33.630 00:03:33.630 00:03:33.630 CUnit - A unit testing framework for C - Version 2.1-3 00:03:33.630 http://cunit.sourceforge.net/ 00:03:33.630 00:03:33.630 00:03:33.630 Suite: memory 00:03:33.890 Test: alloc and free memory map ...[2024-11-20 15:12:37.551305] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:33.890 passed 00:03:33.890 Test: mem map translation ...[2024-11-20 15:12:37.570582] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:33.891 [2024-11-20 
15:12:37.570597] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:33.891 [2024-11-20 15:12:37.570634] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:33.891 [2024-11-20 15:12:37.570640] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:33.891 passed 00:03:33.891 Test: mem map registration ...[2024-11-20 15:12:37.609028] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:33.891 [2024-11-20 15:12:37.609044] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:33.891 passed 00:03:33.891 Test: mem map adjacent registrations ...passed 00:03:33.891 00:03:33.891 Run Summary: Type Total Ran Passed Failed Inactive 00:03:33.891 suites 1 1 n/a 0 0 00:03:33.891 tests 4 4 4 0 0 00:03:33.891 asserts 152 152 152 0 n/a 00:03:33.891 00:03:33.891 Elapsed time = 0.140 seconds 00:03:33.891 00:03:33.891 real 0m0.153s 00:03:33.891 user 0m0.145s 00:03:33.891 sys 0m0.007s 00:03:33.891 15:12:37 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:33.891 15:12:37 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:33.891 ************************************ 00:03:33.891 END TEST env_memory 00:03:33.891 ************************************ 00:03:33.891 15:12:37 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:33.891 15:12:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:33.891 15:12:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.891 15:12:37 env -- common/autotest_common.sh@10 -- # set +x 00:03:33.891 ************************************ 00:03:33.891 START TEST env_vtophys 00:03:33.891 ************************************ 00:03:33.891 15:12:37 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:33.891 EAL: lib.eal log level changed from notice to debug 00:03:33.891 EAL: Detected lcore 0 as core 0 on socket 0 00:03:33.891 EAL: Detected lcore 1 as core 1 on socket 0 00:03:33.891 EAL: Detected lcore 2 as core 2 on socket 0 00:03:33.891 EAL: Detected lcore 3 as core 3 on socket 0 00:03:33.891 EAL: Detected lcore 4 as core 4 on socket 0 00:03:33.891 EAL: Detected lcore 5 as core 5 on socket 0 00:03:33.891 EAL: Detected lcore 6 as core 6 on socket 0 00:03:33.891 EAL: Detected lcore 7 as core 8 on socket 0 00:03:33.891 EAL: Detected lcore 8 as core 9 on socket 0 00:03:33.891 EAL: Detected lcore 9 as core 10 on socket 0 00:03:33.891 EAL: Detected lcore 10 as core 11 on socket 0 00:03:33.891 EAL: Detected lcore 11 as core 12 on socket 0 00:03:33.891 EAL: Detected lcore 12 as core 13 on socket 0 00:03:33.891 EAL: Detected lcore 13 as core 16 on socket 0 00:03:33.891 EAL: Detected lcore 14 as core 17 on socket 0 00:03:33.891 EAL: Detected lcore 15 as core 18 on socket 0 00:03:33.891 EAL: Detected lcore 16 as core 19 on socket 0 00:03:33.891 EAL: Detected lcore 17 as core 20 on socket 0 00:03:33.891 EAL: Detected lcore 18 as core 21 on socket 0 00:03:33.891 EAL: Detected lcore 19 as core 25 on socket 0 00:03:33.891 EAL: Detected lcore 20 as core 26 on socket 0 00:03:33.891 EAL: Detected lcore 21 as core 27 on socket 0 00:03:33.891 EAL: Detected lcore 22 as core 28 on socket 0 00:03:33.891 EAL: Detected lcore 23 as core 29 on socket 0 00:03:33.891 EAL: Detected lcore 24 as core 0 on socket 1 00:03:33.891 EAL: Detected lcore 25 
as core 1 on socket 1 00:03:33.891 EAL: Detected lcore 26 as core 2 on socket 1 00:03:33.891 EAL: Detected lcore 27 as core 3 on socket 1 00:03:33.891 EAL: Detected lcore 28 as core 4 on socket 1 00:03:33.891 EAL: Detected lcore 29 as core 5 on socket 1 00:03:33.891 EAL: Detected lcore 30 as core 6 on socket 1 00:03:33.891 EAL: Detected lcore 31 as core 9 on socket 1 00:03:33.891 EAL: Detected lcore 32 as core 10 on socket 1 00:03:33.891 EAL: Detected lcore 33 as core 11 on socket 1 00:03:33.891 EAL: Detected lcore 34 as core 12 on socket 1 00:03:33.891 EAL: Detected lcore 35 as core 13 on socket 1 00:03:33.891 EAL: Detected lcore 36 as core 16 on socket 1 00:03:33.891 EAL: Detected lcore 37 as core 17 on socket 1 00:03:33.891 EAL: Detected lcore 38 as core 18 on socket 1 00:03:33.891 EAL: Detected lcore 39 as core 19 on socket 1 00:03:33.891 EAL: Detected lcore 40 as core 20 on socket 1 00:03:33.891 EAL: Detected lcore 41 as core 21 on socket 1 00:03:33.891 EAL: Detected lcore 42 as core 24 on socket 1 00:03:33.891 EAL: Detected lcore 43 as core 25 on socket 1 00:03:33.891 EAL: Detected lcore 44 as core 26 on socket 1 00:03:33.891 EAL: Detected lcore 45 as core 27 on socket 1 00:03:33.891 EAL: Detected lcore 46 as core 28 on socket 1 00:03:33.891 EAL: Detected lcore 47 as core 29 on socket 1 00:03:33.891 EAL: Detected lcore 48 as core 0 on socket 0 00:03:33.891 EAL: Detected lcore 49 as core 1 on socket 0 00:03:33.891 EAL: Detected lcore 50 as core 2 on socket 0 00:03:33.891 EAL: Detected lcore 51 as core 3 on socket 0 00:03:33.891 EAL: Detected lcore 52 as core 4 on socket 0 00:03:33.891 EAL: Detected lcore 53 as core 5 on socket 0 00:03:33.891 EAL: Detected lcore 54 as core 6 on socket 0 00:03:33.891 EAL: Detected lcore 55 as core 8 on socket 0 00:03:33.891 EAL: Detected lcore 56 as core 9 on socket 0 00:03:33.891 EAL: Detected lcore 57 as core 10 on socket 0 00:03:33.891 EAL: Detected lcore 58 as core 11 on socket 0 00:03:33.891 EAL: Detected lcore 59 as core 
12 on socket 0 00:03:33.891 EAL: Detected lcore 60 as core 13 on socket 0 00:03:33.891 EAL: Detected lcore 61 as core 16 on socket 0 00:03:33.891 EAL: Detected lcore 62 as core 17 on socket 0 00:03:33.891 EAL: Detected lcore 63 as core 18 on socket 0 00:03:33.891 EAL: Detected lcore 64 as core 19 on socket 0 00:03:33.891 EAL: Detected lcore 65 as core 20 on socket 0 00:03:33.891 EAL: Detected lcore 66 as core 21 on socket 0 00:03:33.891 EAL: Detected lcore 67 as core 25 on socket 0 00:03:33.891 EAL: Detected lcore 68 as core 26 on socket 0 00:03:33.891 EAL: Detected lcore 69 as core 27 on socket 0 00:03:33.891 EAL: Detected lcore 70 as core 28 on socket 0 00:03:33.891 EAL: Detected lcore 71 as core 29 on socket 0 00:03:33.891 EAL: Detected lcore 72 as core 0 on socket 1 00:03:33.891 EAL: Detected lcore 73 as core 1 on socket 1 00:03:33.891 EAL: Detected lcore 74 as core 2 on socket 1 00:03:33.891 EAL: Detected lcore 75 as core 3 on socket 1 00:03:33.891 EAL: Detected lcore 76 as core 4 on socket 1 00:03:33.891 EAL: Detected lcore 77 as core 5 on socket 1 00:03:33.891 EAL: Detected lcore 78 as core 6 on socket 1 00:03:33.891 EAL: Detected lcore 79 as core 9 on socket 1 00:03:33.891 EAL: Detected lcore 80 as core 10 on socket 1 00:03:33.891 EAL: Detected lcore 81 as core 11 on socket 1 00:03:33.891 EAL: Detected lcore 82 as core 12 on socket 1 00:03:33.891 EAL: Detected lcore 83 as core 13 on socket 1 00:03:33.891 EAL: Detected lcore 84 as core 16 on socket 1 00:03:33.891 EAL: Detected lcore 85 as core 17 on socket 1 00:03:33.891 EAL: Detected lcore 86 as core 18 on socket 1 00:03:33.891 EAL: Detected lcore 87 as core 19 on socket 1 00:03:33.891 EAL: Detected lcore 88 as core 20 on socket 1 00:03:33.891 EAL: Detected lcore 89 as core 21 on socket 1 00:03:33.891 EAL: Detected lcore 90 as core 24 on socket 1 00:03:33.891 EAL: Detected lcore 91 as core 25 on socket 1 00:03:33.891 EAL: Detected lcore 92 as core 26 on socket 1 00:03:33.891 EAL: Detected lcore 93 as core 
27 on socket 1 00:03:33.891 EAL: Detected lcore 94 as core 28 on socket 1 00:03:33.891 EAL: Detected lcore 95 as core 29 on socket 1 00:03:33.891 EAL: Maximum logical cores by configuration: 128 00:03:33.891 EAL: Detected CPU lcores: 96 00:03:33.891 EAL: Detected NUMA nodes: 2 00:03:33.891 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:33.891 EAL: Detected shared linkage of DPDK 00:03:33.891 EAL: No shared files mode enabled, IPC will be disabled 00:03:33.891 EAL: Bus pci wants IOVA as 'DC' 00:03:33.891 EAL: Buses did not request a specific IOVA mode. 00:03:33.891 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:33.891 EAL: Selected IOVA mode 'VA' 00:03:33.891 EAL: Probing VFIO support... 00:03:33.891 EAL: IOMMU type 1 (Type 1) is supported 00:03:33.891 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:33.891 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:33.891 EAL: VFIO support initialized 00:03:33.891 EAL: Ask a virtual area of 0x2e000 bytes 00:03:33.891 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:33.891 EAL: Setting up physically contiguous memory... 
00:03:33.891 EAL: Setting maximum number of open files to 524288 00:03:33.891 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:33.891 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:33.891 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:33.891 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.891 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:33.891 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:33.891 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.891 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:33.891 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:33.891 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.891 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:33.891 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:33.891 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.891 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:33.891 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:33.891 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.891 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:33.891 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:33.891 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.891 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:33.891 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:33.891 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.891 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:33.891 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:33.891 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.891 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:33.891 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:33.892 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:33.892 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.892 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:33.892 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:33.892 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.892 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:33.892 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:33.892 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.892 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:33.892 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:33.892 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.892 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:33.892 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:33.892 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.892 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:33.892 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:33.892 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.892 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:33.892 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:33.892 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.892 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:33.892 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:33.892 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.892 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:33.892 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:33.892 EAL: Hugepages will be freed exactly as allocated. 
00:03:33.892 EAL: No shared files mode enabled, IPC is disabled 00:03:33.892 EAL: No shared files mode enabled, IPC is disabled 00:03:33.892 EAL: TSC frequency is ~2300000 KHz 00:03:33.892 EAL: Main lcore 0 is ready (tid=7f25c9603a00;cpuset=[0]) 00:03:33.892 EAL: Trying to obtain current memory policy. 00:03:33.892 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.892 EAL: Restoring previous memory policy: 0 00:03:33.892 EAL: request: mp_malloc_sync 00:03:33.892 EAL: No shared files mode enabled, IPC is disabled 00:03:33.892 EAL: Heap on socket 0 was expanded by 2MB 00:03:33.892 EAL: No shared files mode enabled, IPC is disabled 00:03:34.152 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:34.152 EAL: Mem event callback 'spdk:(nil)' registered 00:03:34.152 00:03:34.152 00:03:34.152 CUnit - A unit testing framework for C - Version 2.1-3 00:03:34.152 http://cunit.sourceforge.net/ 00:03:34.152 00:03:34.152 00:03:34.152 Suite: components_suite 00:03:34.152 Test: vtophys_malloc_test ...passed 00:03:34.152 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:34.152 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.152 EAL: Restoring previous memory policy: 4 00:03:34.152 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.152 EAL: request: mp_malloc_sync 00:03:34.152 EAL: No shared files mode enabled, IPC is disabled 00:03:34.152 EAL: Heap on socket 0 was expanded by 4MB 00:03:34.152 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.152 EAL: request: mp_malloc_sync 00:03:34.152 EAL: No shared files mode enabled, IPC is disabled 00:03:34.152 EAL: Heap on socket 0 was shrunk by 4MB 00:03:34.152 EAL: Trying to obtain current memory policy. 
00:03:34.152 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.152 EAL: Restoring previous memory policy: 4 00:03:34.152 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.152 EAL: request: mp_malloc_sync 00:03:34.152 EAL: No shared files mode enabled, IPC is disabled 00:03:34.152 EAL: Heap on socket 0 was expanded by 6MB 00:03:34.152 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.152 EAL: request: mp_malloc_sync 00:03:34.152 EAL: No shared files mode enabled, IPC is disabled 00:03:34.152 EAL: Heap on socket 0 was shrunk by 6MB 00:03:34.152 EAL: Trying to obtain current memory policy. 00:03:34.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.153 EAL: Restoring previous memory policy: 4 00:03:34.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.153 EAL: request: mp_malloc_sync 00:03:34.153 EAL: No shared files mode enabled, IPC is disabled 00:03:34.153 EAL: Heap on socket 0 was expanded by 10MB 00:03:34.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.153 EAL: request: mp_malloc_sync 00:03:34.153 EAL: No shared files mode enabled, IPC is disabled 00:03:34.153 EAL: Heap on socket 0 was shrunk by 10MB 00:03:34.153 EAL: Trying to obtain current memory policy. 00:03:34.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.153 EAL: Restoring previous memory policy: 4 00:03:34.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.153 EAL: request: mp_malloc_sync 00:03:34.153 EAL: No shared files mode enabled, IPC is disabled 00:03:34.153 EAL: Heap on socket 0 was expanded by 18MB 00:03:34.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.153 EAL: request: mp_malloc_sync 00:03:34.153 EAL: No shared files mode enabled, IPC is disabled 00:03:34.153 EAL: Heap on socket 0 was shrunk by 18MB 00:03:34.153 EAL: Trying to obtain current memory policy. 
00:03:34.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.153 EAL: Restoring previous memory policy: 4 00:03:34.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.153 EAL: request: mp_malloc_sync 00:03:34.153 EAL: No shared files mode enabled, IPC is disabled 00:03:34.153 EAL: Heap on socket 0 was expanded by 34MB 00:03:34.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.153 EAL: request: mp_malloc_sync 00:03:34.153 EAL: No shared files mode enabled, IPC is disabled 00:03:34.153 EAL: Heap on socket 0 was shrunk by 34MB 00:03:34.153 EAL: Trying to obtain current memory policy. 00:03:34.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.153 EAL: Restoring previous memory policy: 4 00:03:34.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.153 EAL: request: mp_malloc_sync 00:03:34.153 EAL: No shared files mode enabled, IPC is disabled 00:03:34.153 EAL: Heap on socket 0 was expanded by 66MB 00:03:34.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.153 EAL: request: mp_malloc_sync 00:03:34.153 EAL: No shared files mode enabled, IPC is disabled 00:03:34.153 EAL: Heap on socket 0 was shrunk by 66MB 00:03:34.153 EAL: Trying to obtain current memory policy. 00:03:34.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.153 EAL: Restoring previous memory policy: 4 00:03:34.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.153 EAL: request: mp_malloc_sync 00:03:34.153 EAL: No shared files mode enabled, IPC is disabled 00:03:34.153 EAL: Heap on socket 0 was expanded by 130MB 00:03:34.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.153 EAL: request: mp_malloc_sync 00:03:34.153 EAL: No shared files mode enabled, IPC is disabled 00:03:34.153 EAL: Heap on socket 0 was shrunk by 130MB 00:03:34.153 EAL: Trying to obtain current memory policy. 
00:03:34.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.153 EAL: Restoring previous memory policy: 4 00:03:34.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.153 EAL: request: mp_malloc_sync 00:03:34.153 EAL: No shared files mode enabled, IPC is disabled 00:03:34.153 EAL: Heap on socket 0 was expanded by 258MB 00:03:34.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.153 EAL: request: mp_malloc_sync 00:03:34.153 EAL: No shared files mode enabled, IPC is disabled 00:03:34.153 EAL: Heap on socket 0 was shrunk by 258MB 00:03:34.153 EAL: Trying to obtain current memory policy. 00:03:34.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.413 EAL: Restoring previous memory policy: 4 00:03:34.413 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.413 EAL: request: mp_malloc_sync 00:03:34.413 EAL: No shared files mode enabled, IPC is disabled 00:03:34.413 EAL: Heap on socket 0 was expanded by 514MB 00:03:34.413 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.413 EAL: request: mp_malloc_sync 00:03:34.413 EAL: No shared files mode enabled, IPC is disabled 00:03:34.413 EAL: Heap on socket 0 was shrunk by 514MB 00:03:34.413 EAL: Trying to obtain current memory policy. 
00:03:34.413 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.672 EAL: Restoring previous memory policy: 4 00:03:34.672 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.672 EAL: request: mp_malloc_sync 00:03:34.672 EAL: No shared files mode enabled, IPC is disabled 00:03:34.672 EAL: Heap on socket 0 was expanded by 1026MB 00:03:34.930 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.930 EAL: request: mp_malloc_sync 00:03:34.930 EAL: No shared files mode enabled, IPC is disabled 00:03:34.930 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:34.930 passed 00:03:34.930 00:03:34.930 Run Summary: Type Total Ran Passed Failed Inactive 00:03:34.931 suites 1 1 n/a 0 0 00:03:34.931 tests 2 2 2 0 0 00:03:34.931 asserts 497 497 497 0 n/a 00:03:34.931 00:03:34.931 Elapsed time = 0.977 seconds 00:03:34.931 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.931 EAL: request: mp_malloc_sync 00:03:34.931 EAL: No shared files mode enabled, IPC is disabled 00:03:34.931 EAL: Heap on socket 0 was shrunk by 2MB 00:03:34.931 EAL: No shared files mode enabled, IPC is disabled 00:03:34.931 EAL: No shared files mode enabled, IPC is disabled 00:03:34.931 EAL: No shared files mode enabled, IPC is disabled 00:03:35.189 00:03:35.189 real 0m1.106s 00:03:35.189 user 0m0.653s 00:03:35.189 sys 0m0.427s 00:03:35.189 15:12:38 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.189 15:12:38 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:35.189 ************************************ 00:03:35.189 END TEST env_vtophys 00:03:35.189 ************************************ 00:03:35.189 15:12:38 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:35.189 15:12:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.189 15:12:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.189 15:12:38 env -- common/autotest_common.sh@10 -- # set +x 00:03:35.190 
************************************ 00:03:35.190 START TEST env_pci 00:03:35.190 ************************************ 00:03:35.190 15:12:38 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:35.190 00:03:35.190 00:03:35.190 CUnit - A unit testing framework for C - Version 2.1-3 00:03:35.190 http://cunit.sourceforge.net/ 00:03:35.190 00:03:35.190 00:03:35.190 Suite: pci 00:03:35.190 Test: pci_hook ...[2024-11-20 15:12:38.920802] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1964914 has claimed it 00:03:35.190 EAL: Cannot find device (10000:00:01.0) 00:03:35.190 EAL: Failed to attach device on primary process 00:03:35.190 passed 00:03:35.190 00:03:35.190 Run Summary: Type Total Ran Passed Failed Inactive 00:03:35.190 suites 1 1 n/a 0 0 00:03:35.190 tests 1 1 1 0 0 00:03:35.190 asserts 25 25 25 0 n/a 00:03:35.190 00:03:35.190 Elapsed time = 0.027 seconds 00:03:35.190 00:03:35.190 real 0m0.046s 00:03:35.190 user 0m0.014s 00:03:35.190 sys 0m0.031s 00:03:35.190 15:12:38 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.190 15:12:38 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:35.190 ************************************ 00:03:35.190 END TEST env_pci 00:03:35.190 ************************************ 00:03:35.190 15:12:38 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:35.190 15:12:38 env -- env/env.sh@15 -- # uname 00:03:35.190 15:12:38 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:35.190 15:12:38 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:35.190 15:12:38 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:35.190 15:12:38 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:35.190 15:12:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.190 15:12:38 env -- common/autotest_common.sh@10 -- # set +x 00:03:35.190 ************************************ 00:03:35.190 START TEST env_dpdk_post_init 00:03:35.190 ************************************ 00:03:35.190 15:12:39 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:35.190 EAL: Detected CPU lcores: 96 00:03:35.190 EAL: Detected NUMA nodes: 2 00:03:35.190 EAL: Detected shared linkage of DPDK 00:03:35.190 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:35.190 EAL: Selected IOVA mode 'VA' 00:03:35.190 EAL: VFIO support initialized 00:03:35.190 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:35.448 EAL: Using IOMMU type 1 (Type 1) 00:03:35.448 EAL: Ignore mapping IO port bar(1) 00:03:35.448 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:35.448 EAL: Ignore mapping IO port bar(1) 00:03:35.448 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:35.448 EAL: Ignore mapping IO port bar(1) 00:03:35.448 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:35.448 EAL: Ignore mapping IO port bar(1) 00:03:35.448 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:35.448 EAL: Ignore mapping IO port bar(1) 00:03:35.448 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:35.448 EAL: Ignore mapping IO port bar(1) 00:03:35.448 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:35.448 EAL: Ignore mapping IO port bar(1) 00:03:35.448 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:35.448 EAL: Ignore mapping IO port bar(1) 00:03:35.448 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:36.387 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:36.387 EAL: Ignore mapping IO port bar(1) 00:03:36.387 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:36.387 EAL: Ignore mapping IO port bar(1) 00:03:36.387 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:36.387 EAL: Ignore mapping IO port bar(1) 00:03:36.387 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:36.387 EAL: Ignore mapping IO port bar(1) 00:03:36.387 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:36.387 EAL: Ignore mapping IO port bar(1) 00:03:36.387 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:36.387 EAL: Ignore mapping IO port bar(1) 00:03:36.387 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:36.387 EAL: Ignore mapping IO port bar(1) 00:03:36.387 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:36.387 EAL: Ignore mapping IO port bar(1) 00:03:36.387 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:39.675 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:39.675 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:39.675 Starting DPDK initialization... 00:03:39.675 Starting SPDK post initialization... 00:03:39.675 SPDK NVMe probe 00:03:39.675 Attaching to 0000:5e:00.0 00:03:39.675 Attached to 0000:5e:00.0 00:03:39.675 Cleaning up... 
00:03:39.675 00:03:39.675 real 0m4.388s 00:03:39.675 user 0m3.008s 00:03:39.675 sys 0m0.454s 00:03:39.675 15:12:43 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.675 15:12:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:39.675 ************************************ 00:03:39.675 END TEST env_dpdk_post_init 00:03:39.675 ************************************ 00:03:39.675 15:12:43 env -- env/env.sh@26 -- # uname 00:03:39.675 15:12:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:39.675 15:12:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:39.675 15:12:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.675 15:12:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.675 15:12:43 env -- common/autotest_common.sh@10 -- # set +x 00:03:39.675 ************************************ 00:03:39.675 START TEST env_mem_callbacks 00:03:39.675 ************************************ 00:03:39.675 15:12:43 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:39.675 EAL: Detected CPU lcores: 96 00:03:39.675 EAL: Detected NUMA nodes: 2 00:03:39.675 EAL: Detected shared linkage of DPDK 00:03:39.675 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:39.675 EAL: Selected IOVA mode 'VA' 00:03:39.675 EAL: VFIO support initialized 00:03:39.675 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:39.675 00:03:39.675 00:03:39.675 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.675 http://cunit.sourceforge.net/ 00:03:39.675 00:03:39.675 00:03:39.675 Suite: memory 00:03:39.675 Test: test ... 
00:03:39.675 register 0x200000200000 2097152 00:03:39.675 malloc 3145728 00:03:39.675 register 0x200000400000 4194304 00:03:39.675 buf 0x200000500000 len 3145728 PASSED 00:03:39.675 malloc 64 00:03:39.675 buf 0x2000004fff40 len 64 PASSED 00:03:39.675 malloc 4194304 00:03:39.675 register 0x200000800000 6291456 00:03:39.675 buf 0x200000a00000 len 4194304 PASSED 00:03:39.675 free 0x200000500000 3145728 00:03:39.675 free 0x2000004fff40 64 00:03:39.675 unregister 0x200000400000 4194304 PASSED 00:03:39.675 free 0x200000a00000 4194304 00:03:39.675 unregister 0x200000800000 6291456 PASSED 00:03:39.675 malloc 8388608 00:03:39.675 register 0x200000400000 10485760 00:03:39.675 buf 0x200000600000 len 8388608 PASSED 00:03:39.675 free 0x200000600000 8388608 00:03:39.675 unregister 0x200000400000 10485760 PASSED 00:03:39.675 passed 00:03:39.675 00:03:39.675 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.675 suites 1 1 n/a 0 0 00:03:39.675 tests 1 1 1 0 0 00:03:39.675 asserts 15 15 15 0 n/a 00:03:39.675 00:03:39.675 Elapsed time = 0.008 seconds 00:03:39.675 00:03:39.675 real 0m0.061s 00:03:39.675 user 0m0.018s 00:03:39.675 sys 0m0.043s 00:03:39.675 15:12:43 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.675 15:12:43 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:39.675 ************************************ 00:03:39.675 END TEST env_mem_callbacks 00:03:39.675 ************************************ 00:03:39.935 00:03:39.935 real 0m6.285s 00:03:39.935 user 0m4.085s 00:03:39.935 sys 0m1.280s 00:03:39.935 15:12:43 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.935 15:12:43 env -- common/autotest_common.sh@10 -- # set +x 00:03:39.935 ************************************ 00:03:39.935 END TEST env 00:03:39.935 ************************************ 00:03:39.935 15:12:43 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:39.935 15:12:43 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.935 15:12:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.935 15:12:43 -- common/autotest_common.sh@10 -- # set +x 00:03:39.935 ************************************ 00:03:39.935 START TEST rpc 00:03:39.935 ************************************ 00:03:39.935 15:12:43 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:39.935 * Looking for test storage... 00:03:39.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:39.935 15:12:43 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:39.935 15:12:43 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:39.935 15:12:43 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:39.935 15:12:43 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:39.935 15:12:43 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:39.935 15:12:43 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:39.935 15:12:43 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:39.935 15:12:43 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:39.935 15:12:43 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:39.935 15:12:43 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:39.935 15:12:43 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:39.935 15:12:43 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:39.935 15:12:43 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:39.935 15:12:43 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:39.935 15:12:43 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:39.935 15:12:43 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:39.935 15:12:43 rpc -- scripts/common.sh@345 -- # : 1 00:03:39.935 15:12:43 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:39.935 15:12:43 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:39.935 15:12:43 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:39.935 15:12:43 rpc -- scripts/common.sh@353 -- # local d=1 00:03:39.935 15:12:43 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:39.935 15:12:43 rpc -- scripts/common.sh@355 -- # echo 1 00:03:39.935 15:12:43 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:39.935 15:12:43 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:39.935 15:12:43 rpc -- scripts/common.sh@353 -- # local d=2 00:03:39.935 15:12:43 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:39.935 15:12:43 rpc -- scripts/common.sh@355 -- # echo 2 00:03:39.935 15:12:43 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:39.935 15:12:43 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:39.935 15:12:43 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:39.935 15:12:43 rpc -- scripts/common.sh@368 -- # return 0 00:03:39.935 15:12:43 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:39.935 15:12:43 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:39.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.935 --rc genhtml_branch_coverage=1 00:03:39.935 --rc genhtml_function_coverage=1 00:03:39.935 --rc genhtml_legend=1 00:03:39.935 --rc geninfo_all_blocks=1 00:03:39.935 --rc geninfo_unexecuted_blocks=1 00:03:39.935 00:03:39.935 ' 00:03:39.935 15:12:43 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:39.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.935 --rc genhtml_branch_coverage=1 00:03:39.935 --rc genhtml_function_coverage=1 00:03:39.935 --rc genhtml_legend=1 00:03:39.935 --rc geninfo_all_blocks=1 00:03:39.935 --rc geninfo_unexecuted_blocks=1 00:03:39.935 00:03:39.935 ' 00:03:39.935 15:12:43 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:39.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:39.935 --rc genhtml_branch_coverage=1 00:03:39.935 --rc genhtml_function_coverage=1 00:03:39.935 --rc genhtml_legend=1 00:03:39.935 --rc geninfo_all_blocks=1 00:03:39.935 --rc geninfo_unexecuted_blocks=1 00:03:39.935 00:03:39.935 ' 00:03:39.935 15:12:43 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:39.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.935 --rc genhtml_branch_coverage=1 00:03:39.935 --rc genhtml_function_coverage=1 00:03:39.935 --rc genhtml_legend=1 00:03:39.935 --rc geninfo_all_blocks=1 00:03:39.935 --rc geninfo_unexecuted_blocks=1 00:03:39.935 00:03:39.935 ' 00:03:39.935 15:12:43 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1965805 00:03:39.935 15:12:43 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:39.935 15:12:43 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:39.935 15:12:43 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1965805 00:03:39.935 15:12:43 rpc -- common/autotest_common.sh@835 -- # '[' -z 1965805 ']' 00:03:39.935 15:12:43 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:39.935 15:12:43 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:39.935 15:12:43 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:39.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:39.935 15:12:43 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:39.935 15:12:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.195 [2024-11-20 15:12:43.882372] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:03:40.195 [2024-11-20 15:12:43.882423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1965805 ] 00:03:40.195 [2024-11-20 15:12:43.956111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:40.195 [2024-11-20 15:12:43.995719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:40.195 [2024-11-20 15:12:43.995757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1965805' to capture a snapshot of events at runtime. 00:03:40.195 [2024-11-20 15:12:43.995766] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:40.195 [2024-11-20 15:12:43.995771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:40.195 [2024-11-20 15:12:43.995776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1965805 for offline analysis/debug. 
00:03:40.195 [2024-11-20 15:12:43.996360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.133 15:12:44 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:41.133 15:12:44 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:41.133 15:12:44 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:41.133 15:12:44 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:41.133 15:12:44 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:41.133 15:12:44 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:41.133 15:12:44 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.133 15:12:44 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.133 15:12:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.133 ************************************ 00:03:41.133 START TEST rpc_integrity 00:03:41.133 ************************************ 00:03:41.133 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:41.133 15:12:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:41.133 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.133 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.133 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.133 15:12:44 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:41.133 15:12:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:41.133 15:12:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:41.133 15:12:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:41.133 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.133 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.133 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.133 15:12:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:41.133 15:12:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:41.133 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.133 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.133 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.133 15:12:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:41.133 { 00:03:41.133 "name": "Malloc0", 00:03:41.133 "aliases": [ 00:03:41.133 "085f956d-43af-4469-9e18-acc4c4a01b1e" 00:03:41.133 ], 00:03:41.133 "product_name": "Malloc disk", 00:03:41.133 "block_size": 512, 00:03:41.133 "num_blocks": 16384, 00:03:41.133 "uuid": "085f956d-43af-4469-9e18-acc4c4a01b1e", 00:03:41.133 "assigned_rate_limits": { 00:03:41.133 "rw_ios_per_sec": 0, 00:03:41.133 "rw_mbytes_per_sec": 0, 00:03:41.133 "r_mbytes_per_sec": 0, 00:03:41.133 "w_mbytes_per_sec": 0 00:03:41.133 }, 00:03:41.133 "claimed": false, 00:03:41.133 "zoned": false, 00:03:41.133 "supported_io_types": { 00:03:41.133 "read": true, 00:03:41.133 "write": true, 00:03:41.133 "unmap": true, 00:03:41.133 "flush": true, 00:03:41.133 "reset": true, 00:03:41.133 "nvme_admin": false, 00:03:41.133 "nvme_io": false, 00:03:41.133 "nvme_io_md": false, 00:03:41.133 "write_zeroes": true, 00:03:41.133 "zcopy": true, 00:03:41.133 "get_zone_info": false, 00:03:41.133 
"zone_management": false, 00:03:41.133 "zone_append": false, 00:03:41.133 "compare": false, 00:03:41.133 "compare_and_write": false, 00:03:41.133 "abort": true, 00:03:41.133 "seek_hole": false, 00:03:41.133 "seek_data": false, 00:03:41.133 "copy": true, 00:03:41.133 "nvme_iov_md": false 00:03:41.133 }, 00:03:41.133 "memory_domains": [ 00:03:41.133 { 00:03:41.133 "dma_device_id": "system", 00:03:41.133 "dma_device_type": 1 00:03:41.133 }, 00:03:41.133 { 00:03:41.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.133 "dma_device_type": 2 00:03:41.133 } 00:03:41.133 ], 00:03:41.133 "driver_specific": {} 00:03:41.133 } 00:03:41.133 ]' 00:03:41.133 15:12:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:41.133 15:12:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:41.133 15:12:44 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:41.133 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.133 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.133 [2024-11-20 15:12:44.870665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:41.133 [2024-11-20 15:12:44.870698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:41.133 [2024-11-20 15:12:44.870710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21b56e0 00:03:41.133 [2024-11-20 15:12:44.870717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:41.133 [2024-11-20 15:12:44.871818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:41.133 [2024-11-20 15:12:44.871840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:41.133 Passthru0 00:03:41.133 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.133 15:12:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:41.133 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.133 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.133 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.133 15:12:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:41.133 { 00:03:41.133 "name": "Malloc0", 00:03:41.133 "aliases": [ 00:03:41.133 "085f956d-43af-4469-9e18-acc4c4a01b1e" 00:03:41.133 ], 00:03:41.133 "product_name": "Malloc disk", 00:03:41.133 "block_size": 512, 00:03:41.133 "num_blocks": 16384, 00:03:41.133 "uuid": "085f956d-43af-4469-9e18-acc4c4a01b1e", 00:03:41.133 "assigned_rate_limits": { 00:03:41.133 "rw_ios_per_sec": 0, 00:03:41.133 "rw_mbytes_per_sec": 0, 00:03:41.133 "r_mbytes_per_sec": 0, 00:03:41.133 "w_mbytes_per_sec": 0 00:03:41.133 }, 00:03:41.133 "claimed": true, 00:03:41.133 "claim_type": "exclusive_write", 00:03:41.133 "zoned": false, 00:03:41.133 "supported_io_types": { 00:03:41.133 "read": true, 00:03:41.133 "write": true, 00:03:41.133 "unmap": true, 00:03:41.133 "flush": true, 00:03:41.133 "reset": true, 00:03:41.133 "nvme_admin": false, 00:03:41.133 "nvme_io": false, 00:03:41.133 "nvme_io_md": false, 00:03:41.133 "write_zeroes": true, 00:03:41.133 "zcopy": true, 00:03:41.133 "get_zone_info": false, 00:03:41.133 "zone_management": false, 00:03:41.133 "zone_append": false, 00:03:41.133 "compare": false, 00:03:41.133 "compare_and_write": false, 00:03:41.133 "abort": true, 00:03:41.134 "seek_hole": false, 00:03:41.134 "seek_data": false, 00:03:41.134 "copy": true, 00:03:41.134 "nvme_iov_md": false 00:03:41.134 }, 00:03:41.134 "memory_domains": [ 00:03:41.134 { 00:03:41.134 "dma_device_id": "system", 00:03:41.134 "dma_device_type": 1 00:03:41.134 }, 00:03:41.134 { 00:03:41.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.134 "dma_device_type": 2 00:03:41.134 } 00:03:41.134 ], 00:03:41.134 "driver_specific": {} 00:03:41.134 }, 00:03:41.134 { 
00:03:41.134 "name": "Passthru0", 00:03:41.134 "aliases": [ 00:03:41.134 "80d97ef4-6d91-54a4-bf09-0fa8041217dc" 00:03:41.134 ], 00:03:41.134 "product_name": "passthru", 00:03:41.134 "block_size": 512, 00:03:41.134 "num_blocks": 16384, 00:03:41.134 "uuid": "80d97ef4-6d91-54a4-bf09-0fa8041217dc", 00:03:41.134 "assigned_rate_limits": { 00:03:41.134 "rw_ios_per_sec": 0, 00:03:41.134 "rw_mbytes_per_sec": 0, 00:03:41.134 "r_mbytes_per_sec": 0, 00:03:41.134 "w_mbytes_per_sec": 0 00:03:41.134 }, 00:03:41.134 "claimed": false, 00:03:41.134 "zoned": false, 00:03:41.134 "supported_io_types": { 00:03:41.134 "read": true, 00:03:41.134 "write": true, 00:03:41.134 "unmap": true, 00:03:41.134 "flush": true, 00:03:41.134 "reset": true, 00:03:41.134 "nvme_admin": false, 00:03:41.134 "nvme_io": false, 00:03:41.134 "nvme_io_md": false, 00:03:41.134 "write_zeroes": true, 00:03:41.134 "zcopy": true, 00:03:41.134 "get_zone_info": false, 00:03:41.134 "zone_management": false, 00:03:41.134 "zone_append": false, 00:03:41.134 "compare": false, 00:03:41.134 "compare_and_write": false, 00:03:41.134 "abort": true, 00:03:41.134 "seek_hole": false, 00:03:41.134 "seek_data": false, 00:03:41.134 "copy": true, 00:03:41.134 "nvme_iov_md": false 00:03:41.134 }, 00:03:41.134 "memory_domains": [ 00:03:41.134 { 00:03:41.134 "dma_device_id": "system", 00:03:41.134 "dma_device_type": 1 00:03:41.134 }, 00:03:41.134 { 00:03:41.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.134 "dma_device_type": 2 00:03:41.134 } 00:03:41.134 ], 00:03:41.134 "driver_specific": { 00:03:41.134 "passthru": { 00:03:41.134 "name": "Passthru0", 00:03:41.134 "base_bdev_name": "Malloc0" 00:03:41.134 } 00:03:41.134 } 00:03:41.134 } 00:03:41.134 ]' 00:03:41.134 15:12:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:41.134 15:12:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:41.134 15:12:44 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:41.134 15:12:44 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.134 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.134 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.134 15:12:44 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:41.134 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.134 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.134 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.134 15:12:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:41.134 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.134 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.134 15:12:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.134 15:12:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:41.134 15:12:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:41.134 15:12:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:41.134 00:03:41.134 real 0m0.277s 00:03:41.134 user 0m0.173s 00:03:41.134 sys 0m0.038s 00:03:41.134 15:12:45 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.134 15:12:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.134 ************************************ 00:03:41.134 END TEST rpc_integrity 00:03:41.134 ************************************ 00:03:41.394 15:12:45 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:41.394 15:12:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.394 15:12:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.394 15:12:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.394 ************************************ 00:03:41.394 START TEST rpc_plugins 
00:03:41.394 ************************************ 00:03:41.394 15:12:45 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:41.394 15:12:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:41.394 15:12:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.394 15:12:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.394 15:12:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.394 15:12:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:41.394 15:12:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:41.394 15:12:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.394 15:12:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.394 15:12:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.394 15:12:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:41.394 { 00:03:41.394 "name": "Malloc1", 00:03:41.394 "aliases": [ 00:03:41.394 "0cbd04fc-0e4e-4b7c-b2eb-454eaac44d84" 00:03:41.394 ], 00:03:41.394 "product_name": "Malloc disk", 00:03:41.394 "block_size": 4096, 00:03:41.394 "num_blocks": 256, 00:03:41.394 "uuid": "0cbd04fc-0e4e-4b7c-b2eb-454eaac44d84", 00:03:41.394 "assigned_rate_limits": { 00:03:41.394 "rw_ios_per_sec": 0, 00:03:41.394 "rw_mbytes_per_sec": 0, 00:03:41.394 "r_mbytes_per_sec": 0, 00:03:41.394 "w_mbytes_per_sec": 0 00:03:41.394 }, 00:03:41.394 "claimed": false, 00:03:41.394 "zoned": false, 00:03:41.394 "supported_io_types": { 00:03:41.394 "read": true, 00:03:41.394 "write": true, 00:03:41.394 "unmap": true, 00:03:41.394 "flush": true, 00:03:41.394 "reset": true, 00:03:41.394 "nvme_admin": false, 00:03:41.394 "nvme_io": false, 00:03:41.394 "nvme_io_md": false, 00:03:41.394 "write_zeroes": true, 00:03:41.394 "zcopy": true, 00:03:41.394 "get_zone_info": false, 00:03:41.394 "zone_management": false, 00:03:41.394 
"zone_append": false, 00:03:41.394 "compare": false, 00:03:41.394 "compare_and_write": false, 00:03:41.394 "abort": true, 00:03:41.394 "seek_hole": false, 00:03:41.394 "seek_data": false, 00:03:41.394 "copy": true, 00:03:41.394 "nvme_iov_md": false 00:03:41.394 }, 00:03:41.394 "memory_domains": [ 00:03:41.394 { 00:03:41.394 "dma_device_id": "system", 00:03:41.394 "dma_device_type": 1 00:03:41.394 }, 00:03:41.394 { 00:03:41.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.394 "dma_device_type": 2 00:03:41.394 } 00:03:41.394 ], 00:03:41.394 "driver_specific": {} 00:03:41.394 } 00:03:41.394 ]' 00:03:41.394 15:12:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:41.394 15:12:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:41.394 15:12:45 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:41.394 15:12:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.394 15:12:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.394 15:12:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.394 15:12:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:41.394 15:12:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.394 15:12:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.394 15:12:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.394 15:12:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:41.394 15:12:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:41.394 15:12:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:41.394 00:03:41.394 real 0m0.144s 00:03:41.394 user 0m0.090s 00:03:41.394 sys 0m0.017s 00:03:41.394 15:12:45 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.394 15:12:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.394 ************************************ 
00:03:41.394 END TEST rpc_plugins 00:03:41.394 ************************************ 00:03:41.394 15:12:45 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:41.394 15:12:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.394 15:12:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.394 15:12:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.653 ************************************ 00:03:41.653 START TEST rpc_trace_cmd_test 00:03:41.653 ************************************ 00:03:41.653 15:12:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:41.653 15:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:41.653 15:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:41.653 15:12:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.653 15:12:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:41.653 15:12:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.653 15:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:41.653 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1965805", 00:03:41.653 "tpoint_group_mask": "0x8", 00:03:41.653 "iscsi_conn": { 00:03:41.653 "mask": "0x2", 00:03:41.653 "tpoint_mask": "0x0" 00:03:41.653 }, 00:03:41.653 "scsi": { 00:03:41.653 "mask": "0x4", 00:03:41.653 "tpoint_mask": "0x0" 00:03:41.653 }, 00:03:41.653 "bdev": { 00:03:41.653 "mask": "0x8", 00:03:41.653 "tpoint_mask": "0xffffffffffffffff" 00:03:41.653 }, 00:03:41.653 "nvmf_rdma": { 00:03:41.653 "mask": "0x10", 00:03:41.653 "tpoint_mask": "0x0" 00:03:41.653 }, 00:03:41.653 "nvmf_tcp": { 00:03:41.653 "mask": "0x20", 00:03:41.653 "tpoint_mask": "0x0" 00:03:41.653 }, 00:03:41.653 "ftl": { 00:03:41.653 "mask": "0x40", 00:03:41.653 "tpoint_mask": "0x0" 00:03:41.653 }, 00:03:41.653 "blobfs": { 00:03:41.653 "mask": "0x80", 00:03:41.653 
"tpoint_mask": "0x0" 00:03:41.653 }, 00:03:41.653 "dsa": { 00:03:41.653 "mask": "0x200", 00:03:41.653 "tpoint_mask": "0x0" 00:03:41.653 }, 00:03:41.653 "thread": { 00:03:41.653 "mask": "0x400", 00:03:41.653 "tpoint_mask": "0x0" 00:03:41.653 }, 00:03:41.653 "nvme_pcie": { 00:03:41.653 "mask": "0x800", 00:03:41.653 "tpoint_mask": "0x0" 00:03:41.653 }, 00:03:41.653 "iaa": { 00:03:41.653 "mask": "0x1000", 00:03:41.653 "tpoint_mask": "0x0" 00:03:41.653 }, 00:03:41.653 "nvme_tcp": { 00:03:41.653 "mask": "0x2000", 00:03:41.653 "tpoint_mask": "0x0" 00:03:41.653 }, 00:03:41.653 "bdev_nvme": { 00:03:41.653 "mask": "0x4000", 00:03:41.653 "tpoint_mask": "0x0" 00:03:41.653 }, 00:03:41.653 "sock": { 00:03:41.653 "mask": "0x8000", 00:03:41.653 "tpoint_mask": "0x0" 00:03:41.653 }, 00:03:41.653 "blob": { 00:03:41.653 "mask": "0x10000", 00:03:41.653 "tpoint_mask": "0x0" 00:03:41.653 }, 00:03:41.653 "bdev_raid": { 00:03:41.653 "mask": "0x20000", 00:03:41.653 "tpoint_mask": "0x0" 00:03:41.653 }, 00:03:41.653 "scheduler": { 00:03:41.653 "mask": "0x40000", 00:03:41.653 "tpoint_mask": "0x0" 00:03:41.653 } 00:03:41.653 }' 00:03:41.653 15:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:41.653 15:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:41.653 15:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:41.653 15:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:41.653 15:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:41.653 15:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:41.653 15:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:41.653 15:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:41.653 15:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:41.653 15:12:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:41.653 00:03:41.653 real 0m0.223s 00:03:41.653 user 0m0.188s 00:03:41.653 sys 0m0.027s 00:03:41.653 15:12:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.654 15:12:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:41.654 ************************************ 00:03:41.654 END TEST rpc_trace_cmd_test 00:03:41.654 ************************************ 00:03:41.654 15:12:45 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:41.654 15:12:45 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:41.654 15:12:45 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:41.654 15:12:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.654 15:12:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.654 15:12:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.912 ************************************ 00:03:41.912 START TEST rpc_daemon_integrity 00:03:41.912 ************************************ 00:03:41.912 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:41.912 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:41.912 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.912 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.912 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.912 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:41.912 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:41.912 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:41.912 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:41.912 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.912 15:12:45 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:41.912 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.912 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:41.912 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:41.912 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.912 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.912 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.912 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:41.912 { 00:03:41.912 "name": "Malloc2", 00:03:41.912 "aliases": [ 00:03:41.912 "0f713c0c-b9f0-42eb-b98e-e6813372ffc4" 00:03:41.912 ], 00:03:41.912 "product_name": "Malloc disk", 00:03:41.912 "block_size": 512, 00:03:41.912 "num_blocks": 16384, 00:03:41.912 "uuid": "0f713c0c-b9f0-42eb-b98e-e6813372ffc4", 00:03:41.912 "assigned_rate_limits": { 00:03:41.912 "rw_ios_per_sec": 0, 00:03:41.912 "rw_mbytes_per_sec": 0, 00:03:41.912 "r_mbytes_per_sec": 0, 00:03:41.912 "w_mbytes_per_sec": 0 00:03:41.912 }, 00:03:41.912 "claimed": false, 00:03:41.912 "zoned": false, 00:03:41.912 "supported_io_types": { 00:03:41.912 "read": true, 00:03:41.912 "write": true, 00:03:41.912 "unmap": true, 00:03:41.912 "flush": true, 00:03:41.912 "reset": true, 00:03:41.912 "nvme_admin": false, 00:03:41.912 "nvme_io": false, 00:03:41.912 "nvme_io_md": false, 00:03:41.912 "write_zeroes": true, 00:03:41.912 "zcopy": true, 00:03:41.912 "get_zone_info": false, 00:03:41.912 "zone_management": false, 00:03:41.912 "zone_append": false, 00:03:41.912 "compare": false, 00:03:41.912 "compare_and_write": false, 00:03:41.912 "abort": true, 00:03:41.912 "seek_hole": false, 00:03:41.912 "seek_data": false, 00:03:41.912 "copy": true, 00:03:41.912 "nvme_iov_md": false 00:03:41.912 }, 00:03:41.912 "memory_domains": [ 00:03:41.912 { 
00:03:41.912 "dma_device_id": "system", 00:03:41.912 "dma_device_type": 1 00:03:41.912 }, 00:03:41.912 { 00:03:41.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.912 "dma_device_type": 2 00:03:41.912 } 00:03:41.912 ], 00:03:41.912 "driver_specific": {} 00:03:41.912 } 00:03:41.912 ]' 00:03:41.912 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.913 [2024-11-20 15:12:45.721000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:41.913 [2024-11-20 15:12:45.721029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:41.913 [2024-11-20 15:12:45.721041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2245b70 00:03:41.913 [2024-11-20 15:12:45.721047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:41.913 [2024-11-20 15:12:45.722031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:41.913 [2024-11-20 15:12:45.722054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:41.913 Passthru0 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:41.913 { 00:03:41.913 "name": "Malloc2", 00:03:41.913 "aliases": [ 00:03:41.913 "0f713c0c-b9f0-42eb-b98e-e6813372ffc4" 00:03:41.913 ], 00:03:41.913 "product_name": "Malloc disk", 00:03:41.913 "block_size": 512, 00:03:41.913 "num_blocks": 16384, 00:03:41.913 "uuid": "0f713c0c-b9f0-42eb-b98e-e6813372ffc4", 00:03:41.913 "assigned_rate_limits": { 00:03:41.913 "rw_ios_per_sec": 0, 00:03:41.913 "rw_mbytes_per_sec": 0, 00:03:41.913 "r_mbytes_per_sec": 0, 00:03:41.913 "w_mbytes_per_sec": 0 00:03:41.913 }, 00:03:41.913 "claimed": true, 00:03:41.913 "claim_type": "exclusive_write", 00:03:41.913 "zoned": false, 00:03:41.913 "supported_io_types": { 00:03:41.913 "read": true, 00:03:41.913 "write": true, 00:03:41.913 "unmap": true, 00:03:41.913 "flush": true, 00:03:41.913 "reset": true, 00:03:41.913 "nvme_admin": false, 00:03:41.913 "nvme_io": false, 00:03:41.913 "nvme_io_md": false, 00:03:41.913 "write_zeroes": true, 00:03:41.913 "zcopy": true, 00:03:41.913 "get_zone_info": false, 00:03:41.913 "zone_management": false, 00:03:41.913 "zone_append": false, 00:03:41.913 "compare": false, 00:03:41.913 "compare_and_write": false, 00:03:41.913 "abort": true, 00:03:41.913 "seek_hole": false, 00:03:41.913 "seek_data": false, 00:03:41.913 "copy": true, 00:03:41.913 "nvme_iov_md": false 00:03:41.913 }, 00:03:41.913 "memory_domains": [ 00:03:41.913 { 00:03:41.913 "dma_device_id": "system", 00:03:41.913 "dma_device_type": 1 00:03:41.913 }, 00:03:41.913 { 00:03:41.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.913 "dma_device_type": 2 00:03:41.913 } 00:03:41.913 ], 00:03:41.913 "driver_specific": {} 00:03:41.913 }, 00:03:41.913 { 00:03:41.913 "name": "Passthru0", 00:03:41.913 "aliases": [ 00:03:41.913 "80fae126-67d9-5146-ac4e-ac1f2f17cea8" 00:03:41.913 ], 00:03:41.913 "product_name": "passthru", 00:03:41.913 "block_size": 512, 00:03:41.913 "num_blocks": 16384, 00:03:41.913 "uuid": 
"80fae126-67d9-5146-ac4e-ac1f2f17cea8", 00:03:41.913 "assigned_rate_limits": { 00:03:41.913 "rw_ios_per_sec": 0, 00:03:41.913 "rw_mbytes_per_sec": 0, 00:03:41.913 "r_mbytes_per_sec": 0, 00:03:41.913 "w_mbytes_per_sec": 0 00:03:41.913 }, 00:03:41.913 "claimed": false, 00:03:41.913 "zoned": false, 00:03:41.913 "supported_io_types": { 00:03:41.913 "read": true, 00:03:41.913 "write": true, 00:03:41.913 "unmap": true, 00:03:41.913 "flush": true, 00:03:41.913 "reset": true, 00:03:41.913 "nvme_admin": false, 00:03:41.913 "nvme_io": false, 00:03:41.913 "nvme_io_md": false, 00:03:41.913 "write_zeroes": true, 00:03:41.913 "zcopy": true, 00:03:41.913 "get_zone_info": false, 00:03:41.913 "zone_management": false, 00:03:41.913 "zone_append": false, 00:03:41.913 "compare": false, 00:03:41.913 "compare_and_write": false, 00:03:41.913 "abort": true, 00:03:41.913 "seek_hole": false, 00:03:41.913 "seek_data": false, 00:03:41.913 "copy": true, 00:03:41.913 "nvme_iov_md": false 00:03:41.913 }, 00:03:41.913 "memory_domains": [ 00:03:41.913 { 00:03:41.913 "dma_device_id": "system", 00:03:41.913 "dma_device_type": 1 00:03:41.913 }, 00:03:41.913 { 00:03:41.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.913 "dma_device_type": 2 00:03:41.913 } 00:03:41.913 ], 00:03:41.913 "driver_specific": { 00:03:41.913 "passthru": { 00:03:41.913 "name": "Passthru0", 00:03:41.913 "base_bdev_name": "Malloc2" 00:03:41.913 } 00:03:41.913 } 00:03:41.913 } 00:03:41.913 ]' 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.913 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.172 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.172 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:42.172 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:42.172 15:12:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:42.172 00:03:42.172 real 0m0.276s 00:03:42.172 user 0m0.174s 00:03:42.172 sys 0m0.037s 00:03:42.172 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.172 15:12:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.172 ************************************ 00:03:42.172 END TEST rpc_daemon_integrity 00:03:42.172 ************************************ 00:03:42.172 15:12:45 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:42.172 15:12:45 rpc -- rpc/rpc.sh@84 -- # killprocess 1965805 00:03:42.172 15:12:45 rpc -- common/autotest_common.sh@954 -- # '[' -z 1965805 ']' 00:03:42.172 15:12:45 rpc -- common/autotest_common.sh@958 -- # kill -0 1965805 00:03:42.172 15:12:45 rpc -- common/autotest_common.sh@959 -- # uname 00:03:42.172 15:12:45 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:42.172 15:12:45 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1965805 00:03:42.172 15:12:45 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:42.172 15:12:45 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:42.172 15:12:45 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1965805' 00:03:42.172 killing process with pid 1965805 00:03:42.172 15:12:45 rpc -- common/autotest_common.sh@973 -- # kill 1965805 00:03:42.172 15:12:45 rpc -- common/autotest_common.sh@978 -- # wait 1965805 00:03:42.432 00:03:42.432 real 0m2.595s 00:03:42.432 user 0m3.347s 00:03:42.432 sys 0m0.697s 00:03:42.432 15:12:46 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.432 15:12:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.432 ************************************ 00:03:42.432 END TEST rpc 00:03:42.432 ************************************ 00:03:42.432 15:12:46 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:42.432 15:12:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.432 15:12:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.432 15:12:46 -- common/autotest_common.sh@10 -- # set +x 00:03:42.432 ************************************ 00:03:42.432 START TEST skip_rpc 00:03:42.432 ************************************ 00:03:42.432 15:12:46 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:42.692 * Looking for test storage... 
00:03:42.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:42.692 15:12:46 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:42.692 15:12:46 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:42.692 15:12:46 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:42.692 15:12:46 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:42.692 15:12:46 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:42.692 15:12:46 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:42.692 15:12:46 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:42.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.692 --rc genhtml_branch_coverage=1 00:03:42.692 --rc genhtml_function_coverage=1 00:03:42.692 --rc genhtml_legend=1 00:03:42.692 --rc geninfo_all_blocks=1 00:03:42.692 --rc geninfo_unexecuted_blocks=1 00:03:42.692 00:03:42.692 ' 00:03:42.692 15:12:46 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:42.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.692 --rc genhtml_branch_coverage=1 00:03:42.692 --rc genhtml_function_coverage=1 00:03:42.692 --rc genhtml_legend=1 00:03:42.692 --rc geninfo_all_blocks=1 00:03:42.692 --rc geninfo_unexecuted_blocks=1 00:03:42.692 00:03:42.692 ' 00:03:42.692 15:12:46 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:03:42.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.692 --rc genhtml_branch_coverage=1 00:03:42.692 --rc genhtml_function_coverage=1 00:03:42.692 --rc genhtml_legend=1 00:03:42.692 --rc geninfo_all_blocks=1 00:03:42.692 --rc geninfo_unexecuted_blocks=1 00:03:42.692 00:03:42.692 ' 00:03:42.692 15:12:46 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:42.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.692 --rc genhtml_branch_coverage=1 00:03:42.692 --rc genhtml_function_coverage=1 00:03:42.692 --rc genhtml_legend=1 00:03:42.692 --rc geninfo_all_blocks=1 00:03:42.692 --rc geninfo_unexecuted_blocks=1 00:03:42.692 00:03:42.692 ' 00:03:42.692 15:12:46 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:42.692 15:12:46 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:42.692 15:12:46 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:42.692 15:12:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.692 15:12:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.692 15:12:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.692 ************************************ 00:03:42.692 START TEST skip_rpc 00:03:42.692 ************************************ 00:03:42.692 15:12:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:42.692 15:12:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1966443 00:03:42.692 15:12:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:42.692 15:12:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:42.692 15:12:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:03:42.692 [2024-11-20 15:12:46.584825] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:03:42.692 [2024-11-20 15:12:46.584861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1966443 ] 00:03:42.951 [2024-11-20 15:12:46.656848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.951 [2024-11-20 15:12:46.697273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:48.218 15:12:51 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1966443 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1966443 ']' 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1966443 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1966443 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1966443' 00:03:48.218 killing process with pid 1966443 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1966443 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1966443 00:03:48.218 00:03:48.218 real 0m5.363s 00:03:48.218 user 0m5.120s 00:03:48.218 sys 0m0.278s 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.218 15:12:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.218 ************************************ 00:03:48.218 END TEST skip_rpc 00:03:48.218 ************************************ 00:03:48.218 15:12:51 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:48.218 15:12:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.218 15:12:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.218 15:12:51 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.218 ************************************ 00:03:48.218 START TEST skip_rpc_with_json 00:03:48.218 ************************************ 00:03:48.218 15:12:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:48.218 15:12:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:48.218 15:12:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1967390 00:03:48.218 15:12:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:48.218 15:12:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:48.218 15:12:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1967390 00:03:48.218 15:12:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1967390 ']' 00:03:48.218 15:12:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:48.218 15:12:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:48.218 15:12:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:48.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:48.218 15:12:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:48.218 15:12:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:48.218 [2024-11-20 15:12:52.020614] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:03:48.218 [2024-11-20 15:12:52.020659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1967390 ] 00:03:48.218 [2024-11-20 15:12:52.096660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.477 [2024-11-20 15:12:52.137135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.477 15:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:48.477 15:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:48.477 15:12:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:48.477 15:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.477 15:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:48.477 [2024-11-20 15:12:52.355469] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:48.477 request: 00:03:48.477 { 00:03:48.477 "trtype": "tcp", 00:03:48.477 "method": "nvmf_get_transports", 00:03:48.477 "req_id": 1 00:03:48.477 } 00:03:48.477 Got JSON-RPC error response 00:03:48.477 response: 00:03:48.477 { 00:03:48.477 "code": -19, 00:03:48.477 "message": "No such device" 00:03:48.477 } 00:03:48.477 15:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:48.477 15:12:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:48.477 15:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.477 15:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:48.477 [2024-11-20 15:12:52.367578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:48.477 15:12:52 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.477 15:12:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:48.477 15:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.477 15:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:48.736 15:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.736 15:12:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:48.736 { 00:03:48.736 "subsystems": [ 00:03:48.736 { 00:03:48.736 "subsystem": "fsdev", 00:03:48.736 "config": [ 00:03:48.736 { 00:03:48.736 "method": "fsdev_set_opts", 00:03:48.736 "params": { 00:03:48.736 "fsdev_io_pool_size": 65535, 00:03:48.736 "fsdev_io_cache_size": 256 00:03:48.736 } 00:03:48.736 } 00:03:48.736 ] 00:03:48.736 }, 00:03:48.736 { 00:03:48.736 "subsystem": "vfio_user_target", 00:03:48.736 "config": null 00:03:48.736 }, 00:03:48.736 { 00:03:48.736 "subsystem": "keyring", 00:03:48.736 "config": [] 00:03:48.736 }, 00:03:48.736 { 00:03:48.736 "subsystem": "iobuf", 00:03:48.736 "config": [ 00:03:48.736 { 00:03:48.736 "method": "iobuf_set_options", 00:03:48.736 "params": { 00:03:48.736 "small_pool_count": 8192, 00:03:48.736 "large_pool_count": 1024, 00:03:48.736 "small_bufsize": 8192, 00:03:48.737 "large_bufsize": 135168, 00:03:48.737 "enable_numa": false 00:03:48.737 } 00:03:48.737 } 00:03:48.737 ] 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "subsystem": "sock", 00:03:48.737 "config": [ 00:03:48.737 { 00:03:48.737 "method": "sock_set_default_impl", 00:03:48.737 "params": { 00:03:48.737 "impl_name": "posix" 00:03:48.737 } 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "method": "sock_impl_set_options", 00:03:48.737 "params": { 00:03:48.737 "impl_name": "ssl", 00:03:48.737 "recv_buf_size": 4096, 00:03:48.737 "send_buf_size": 4096, 
00:03:48.737 "enable_recv_pipe": true, 00:03:48.737 "enable_quickack": false, 00:03:48.737 "enable_placement_id": 0, 00:03:48.737 "enable_zerocopy_send_server": true, 00:03:48.737 "enable_zerocopy_send_client": false, 00:03:48.737 "zerocopy_threshold": 0, 00:03:48.737 "tls_version": 0, 00:03:48.737 "enable_ktls": false 00:03:48.737 } 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "method": "sock_impl_set_options", 00:03:48.737 "params": { 00:03:48.737 "impl_name": "posix", 00:03:48.737 "recv_buf_size": 2097152, 00:03:48.737 "send_buf_size": 2097152, 00:03:48.737 "enable_recv_pipe": true, 00:03:48.737 "enable_quickack": false, 00:03:48.737 "enable_placement_id": 0, 00:03:48.737 "enable_zerocopy_send_server": true, 00:03:48.737 "enable_zerocopy_send_client": false, 00:03:48.737 "zerocopy_threshold": 0, 00:03:48.737 "tls_version": 0, 00:03:48.737 "enable_ktls": false 00:03:48.737 } 00:03:48.737 } 00:03:48.737 ] 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "subsystem": "vmd", 00:03:48.737 "config": [] 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "subsystem": "accel", 00:03:48.737 "config": [ 00:03:48.737 { 00:03:48.737 "method": "accel_set_options", 00:03:48.737 "params": { 00:03:48.737 "small_cache_size": 128, 00:03:48.737 "large_cache_size": 16, 00:03:48.737 "task_count": 2048, 00:03:48.737 "sequence_count": 2048, 00:03:48.737 "buf_count": 2048 00:03:48.737 } 00:03:48.737 } 00:03:48.737 ] 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "subsystem": "bdev", 00:03:48.737 "config": [ 00:03:48.737 { 00:03:48.737 "method": "bdev_set_options", 00:03:48.737 "params": { 00:03:48.737 "bdev_io_pool_size": 65535, 00:03:48.737 "bdev_io_cache_size": 256, 00:03:48.737 "bdev_auto_examine": true, 00:03:48.737 "iobuf_small_cache_size": 128, 00:03:48.737 "iobuf_large_cache_size": 16 00:03:48.737 } 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "method": "bdev_raid_set_options", 00:03:48.737 "params": { 00:03:48.737 "process_window_size_kb": 1024, 00:03:48.737 "process_max_bandwidth_mb_sec": 0 
00:03:48.737 } 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "method": "bdev_iscsi_set_options", 00:03:48.737 "params": { 00:03:48.737 "timeout_sec": 30 00:03:48.737 } 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "method": "bdev_nvme_set_options", 00:03:48.737 "params": { 00:03:48.737 "action_on_timeout": "none", 00:03:48.737 "timeout_us": 0, 00:03:48.737 "timeout_admin_us": 0, 00:03:48.737 "keep_alive_timeout_ms": 10000, 00:03:48.737 "arbitration_burst": 0, 00:03:48.737 "low_priority_weight": 0, 00:03:48.737 "medium_priority_weight": 0, 00:03:48.737 "high_priority_weight": 0, 00:03:48.737 "nvme_adminq_poll_period_us": 10000, 00:03:48.737 "nvme_ioq_poll_period_us": 0, 00:03:48.737 "io_queue_requests": 0, 00:03:48.737 "delay_cmd_submit": true, 00:03:48.737 "transport_retry_count": 4, 00:03:48.737 "bdev_retry_count": 3, 00:03:48.737 "transport_ack_timeout": 0, 00:03:48.737 "ctrlr_loss_timeout_sec": 0, 00:03:48.737 "reconnect_delay_sec": 0, 00:03:48.737 "fast_io_fail_timeout_sec": 0, 00:03:48.737 "disable_auto_failback": false, 00:03:48.737 "generate_uuids": false, 00:03:48.737 "transport_tos": 0, 00:03:48.737 "nvme_error_stat": false, 00:03:48.737 "rdma_srq_size": 0, 00:03:48.737 "io_path_stat": false, 00:03:48.737 "allow_accel_sequence": false, 00:03:48.737 "rdma_max_cq_size": 0, 00:03:48.737 "rdma_cm_event_timeout_ms": 0, 00:03:48.737 "dhchap_digests": [ 00:03:48.737 "sha256", 00:03:48.737 "sha384", 00:03:48.737 "sha512" 00:03:48.737 ], 00:03:48.737 "dhchap_dhgroups": [ 00:03:48.737 "null", 00:03:48.737 "ffdhe2048", 00:03:48.737 "ffdhe3072", 00:03:48.737 "ffdhe4096", 00:03:48.737 "ffdhe6144", 00:03:48.737 "ffdhe8192" 00:03:48.737 ] 00:03:48.737 } 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "method": "bdev_nvme_set_hotplug", 00:03:48.737 "params": { 00:03:48.737 "period_us": 100000, 00:03:48.737 "enable": false 00:03:48.737 } 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "method": "bdev_wait_for_examine" 00:03:48.737 } 00:03:48.737 ] 00:03:48.737 }, 00:03:48.737 { 
00:03:48.737 "subsystem": "scsi", 00:03:48.737 "config": null 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "subsystem": "scheduler", 00:03:48.737 "config": [ 00:03:48.737 { 00:03:48.737 "method": "framework_set_scheduler", 00:03:48.737 "params": { 00:03:48.737 "name": "static" 00:03:48.737 } 00:03:48.737 } 00:03:48.737 ] 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "subsystem": "vhost_scsi", 00:03:48.737 "config": [] 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "subsystem": "vhost_blk", 00:03:48.737 "config": [] 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "subsystem": "ublk", 00:03:48.737 "config": [] 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "subsystem": "nbd", 00:03:48.737 "config": [] 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "subsystem": "nvmf", 00:03:48.737 "config": [ 00:03:48.737 { 00:03:48.737 "method": "nvmf_set_config", 00:03:48.737 "params": { 00:03:48.737 "discovery_filter": "match_any", 00:03:48.737 "admin_cmd_passthru": { 00:03:48.737 "identify_ctrlr": false 00:03:48.737 }, 00:03:48.737 "dhchap_digests": [ 00:03:48.737 "sha256", 00:03:48.737 "sha384", 00:03:48.737 "sha512" 00:03:48.737 ], 00:03:48.737 "dhchap_dhgroups": [ 00:03:48.737 "null", 00:03:48.737 "ffdhe2048", 00:03:48.737 "ffdhe3072", 00:03:48.737 "ffdhe4096", 00:03:48.737 "ffdhe6144", 00:03:48.737 "ffdhe8192" 00:03:48.737 ] 00:03:48.737 } 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "method": "nvmf_set_max_subsystems", 00:03:48.737 "params": { 00:03:48.737 "max_subsystems": 1024 00:03:48.737 } 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "method": "nvmf_set_crdt", 00:03:48.737 "params": { 00:03:48.737 "crdt1": 0, 00:03:48.737 "crdt2": 0, 00:03:48.737 "crdt3": 0 00:03:48.737 } 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "method": "nvmf_create_transport", 00:03:48.737 "params": { 00:03:48.737 "trtype": "TCP", 00:03:48.737 "max_queue_depth": 128, 00:03:48.737 "max_io_qpairs_per_ctrlr": 127, 00:03:48.737 "in_capsule_data_size": 4096, 00:03:48.737 "max_io_size": 131072, 00:03:48.737 
"io_unit_size": 131072, 00:03:48.737 "max_aq_depth": 128, 00:03:48.737 "num_shared_buffers": 511, 00:03:48.737 "buf_cache_size": 4294967295, 00:03:48.737 "dif_insert_or_strip": false, 00:03:48.737 "zcopy": false, 00:03:48.737 "c2h_success": true, 00:03:48.737 "sock_priority": 0, 00:03:48.737 "abort_timeout_sec": 1, 00:03:48.737 "ack_timeout": 0, 00:03:48.737 "data_wr_pool_size": 0 00:03:48.737 } 00:03:48.737 } 00:03:48.737 ] 00:03:48.737 }, 00:03:48.737 { 00:03:48.737 "subsystem": "iscsi", 00:03:48.737 "config": [ 00:03:48.737 { 00:03:48.737 "method": "iscsi_set_options", 00:03:48.737 "params": { 00:03:48.737 "node_base": "iqn.2016-06.io.spdk", 00:03:48.737 "max_sessions": 128, 00:03:48.737 "max_connections_per_session": 2, 00:03:48.737 "max_queue_depth": 64, 00:03:48.737 "default_time2wait": 2, 00:03:48.737 "default_time2retain": 20, 00:03:48.737 "first_burst_length": 8192, 00:03:48.737 "immediate_data": true, 00:03:48.737 "allow_duplicated_isid": false, 00:03:48.737 "error_recovery_level": 0, 00:03:48.737 "nop_timeout": 60, 00:03:48.737 "nop_in_interval": 30, 00:03:48.737 "disable_chap": false, 00:03:48.737 "require_chap": false, 00:03:48.737 "mutual_chap": false, 00:03:48.737 "chap_group": 0, 00:03:48.737 "max_large_datain_per_connection": 64, 00:03:48.737 "max_r2t_per_connection": 4, 00:03:48.737 "pdu_pool_size": 36864, 00:03:48.737 "immediate_data_pool_size": 16384, 00:03:48.737 "data_out_pool_size": 2048 00:03:48.737 } 00:03:48.737 } 00:03:48.737 ] 00:03:48.737 } 00:03:48.737 ] 00:03:48.737 } 00:03:48.737 15:12:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:48.737 15:12:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1967390 00:03:48.737 15:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1967390 ']' 00:03:48.737 15:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1967390 00:03:48.738 15:12:52 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:03:48.738 15:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:48.738 15:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1967390 00:03:48.738 15:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:48.738 15:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:48.738 15:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1967390' 00:03:48.738 killing process with pid 1967390 00:03:48.738 15:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1967390 00:03:48.738 15:12:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1967390 00:03:48.997 15:12:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1967623 00:03:48.997 15:12:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:48.997 15:12:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:54.272 15:12:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1967623 00:03:54.272 15:12:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1967623 ']' 00:03:54.272 15:12:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1967623 00:03:54.272 15:12:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:54.272 15:12:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:54.272 15:12:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1967623 00:03:54.272 15:12:57 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:54.272 15:12:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:54.272 15:12:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1967623' 00:03:54.272 killing process with pid 1967623 00:03:54.272 15:12:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1967623 00:03:54.272 15:12:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1967623 00:03:54.532 15:12:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:54.532 15:12:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:54.532 00:03:54.532 real 0m6.292s 00:03:54.532 user 0m5.995s 00:03:54.532 sys 0m0.606s 00:03:54.532 15:12:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.532 15:12:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:54.532 ************************************ 00:03:54.532 END TEST skip_rpc_with_json 00:03:54.532 ************************************ 00:03:54.532 15:12:58 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:54.532 15:12:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.532 15:12:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.532 15:12:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.532 ************************************ 00:03:54.532 START TEST skip_rpc_with_delay 00:03:54.532 ************************************ 00:03:54.532 15:12:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:54.532 15:12:58 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:54.532 15:12:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:54.532 15:12:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:54.532 15:12:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.532 15:12:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:54.533 15:12:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.533 15:12:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:54.533 15:12:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.533 15:12:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:54.533 15:12:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.533 15:12:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:54.533 15:12:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:54.533 [2024-11-20 15:12:58.381485] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:54.533 15:12:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:54.533 15:12:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:54.533 15:12:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:54.533 15:12:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:54.533 00:03:54.533 real 0m0.070s 00:03:54.533 user 0m0.043s 00:03:54.533 sys 0m0.026s 00:03:54.533 15:12:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.533 15:12:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:54.533 ************************************ 00:03:54.533 END TEST skip_rpc_with_delay 00:03:54.533 ************************************ 00:03:54.533 15:12:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:54.533 15:12:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:54.533 15:12:58 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:54.533 15:12:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.533 15:12:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.533 15:12:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.792 ************************************ 00:03:54.792 START TEST exit_on_failed_rpc_init 00:03:54.792 ************************************ 00:03:54.792 15:12:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:54.792 15:12:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1968600 00:03:54.792 15:12:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1968600 00:03:54.792 15:12:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:03:54.792 15:12:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1968600 ']' 00:03:54.792 15:12:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:54.792 15:12:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:54.792 15:12:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:54.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:54.792 15:12:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:54.792 15:12:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:54.792 [2024-11-20 15:12:58.517391] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:03:54.792 [2024-11-20 15:12:58.517432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1968600 ] 00:03:54.792 [2024-11-20 15:12:58.593077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.792 [2024-11-20 15:12:58.636460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.052 15:12:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:55.052 15:12:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:55.052 15:12:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:55.052 15:12:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:55.052 
15:12:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:55.052 15:12:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:55.052 15:12:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.052 15:12:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:55.052 15:12:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.052 15:12:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:55.052 15:12:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.052 15:12:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:55.052 15:12:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.052 15:12:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:55.052 15:12:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:55.052 [2024-11-20 15:12:58.909808] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:03:55.052 [2024-11-20 15:12:58.909856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1968606 ] 00:03:55.311 [2024-11-20 15:12:58.983611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.311 [2024-11-20 15:12:59.024686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:55.311 [2024-11-20 15:12:59.024740] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:55.311 [2024-11-20 15:12:59.024749] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:55.311 [2024-11-20 15:12:59.024757] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:55.311 15:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:55.311 15:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:55.311 15:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:55.311 15:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:55.311 15:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:55.311 15:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:55.311 15:12:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:55.311 15:12:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1968600 00:03:55.311 15:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1968600 ']' 00:03:55.311 15:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1968600 00:03:55.311 15:12:59 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:55.311 15:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:55.311 15:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1968600 00:03:55.311 15:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:55.311 15:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:55.311 15:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1968600' 00:03:55.311 killing process with pid 1968600 00:03:55.311 15:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1968600 00:03:55.311 15:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1968600 00:03:55.570 00:03:55.570 real 0m0.956s 00:03:55.570 user 0m1.036s 00:03:55.570 sys 0m0.378s 00:03:55.570 15:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.570 15:12:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:55.570 ************************************ 00:03:55.570 END TEST exit_on_failed_rpc_init 00:03:55.570 ************************************ 00:03:55.570 15:12:59 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:55.570 00:03:55.570 real 0m13.140s 00:03:55.570 user 0m12.405s 00:03:55.570 sys 0m1.568s 00:03:55.570 15:12:59 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.570 15:12:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.570 ************************************ 00:03:55.570 END TEST skip_rpc 00:03:55.570 ************************************ 00:03:55.829 15:12:59 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:55.829 15:12:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:55.829 15:12:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.829 15:12:59 -- common/autotest_common.sh@10 -- # set +x 00:03:55.829 ************************************ 00:03:55.829 START TEST rpc_client 00:03:55.829 ************************************ 00:03:55.829 15:12:59 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:55.829 * Looking for test storage... 00:03:55.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:55.829 15:12:59 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:55.829 15:12:59 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:03:55.829 15:12:59 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:55.829 15:12:59 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:55.829 15:12:59 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:55.829 15:12:59 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:55.829 15:12:59 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:55.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.829 --rc genhtml_branch_coverage=1 00:03:55.829 --rc genhtml_function_coverage=1 00:03:55.829 --rc genhtml_legend=1 00:03:55.829 --rc geninfo_all_blocks=1 00:03:55.829 --rc geninfo_unexecuted_blocks=1 00:03:55.829 00:03:55.829 ' 00:03:55.829 15:12:59 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:55.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.829 --rc genhtml_branch_coverage=1 
00:03:55.829 --rc genhtml_function_coverage=1 00:03:55.829 --rc genhtml_legend=1 00:03:55.829 --rc geninfo_all_blocks=1 00:03:55.829 --rc geninfo_unexecuted_blocks=1 00:03:55.830 00:03:55.830 ' 00:03:55.830 15:12:59 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:55.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.830 --rc genhtml_branch_coverage=1 00:03:55.830 --rc genhtml_function_coverage=1 00:03:55.830 --rc genhtml_legend=1 00:03:55.830 --rc geninfo_all_blocks=1 00:03:55.830 --rc geninfo_unexecuted_blocks=1 00:03:55.830 00:03:55.830 ' 00:03:55.830 15:12:59 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:55.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.830 --rc genhtml_branch_coverage=1 00:03:55.830 --rc genhtml_function_coverage=1 00:03:55.830 --rc genhtml_legend=1 00:03:55.830 --rc geninfo_all_blocks=1 00:03:55.830 --rc geninfo_unexecuted_blocks=1 00:03:55.830 00:03:55.830 ' 00:03:55.830 15:12:59 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:55.830 OK 00:03:55.830 15:12:59 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:55.830 00:03:55.830 real 0m0.196s 00:03:55.830 user 0m0.115s 00:03:55.830 sys 0m0.094s 00:03:55.830 15:12:59 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.830 15:12:59 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:55.830 ************************************ 00:03:55.830 END TEST rpc_client 00:03:55.830 ************************************ 00:03:56.089 15:12:59 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:56.089 15:12:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.089 15:12:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.089 15:12:59 -- common/autotest_common.sh@10 
-- # set +x 00:03:56.089 ************************************ 00:03:56.089 START TEST json_config 00:03:56.089 ************************************ 00:03:56.089 15:12:59 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:56.089 15:12:59 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:56.089 15:12:59 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:03:56.089 15:12:59 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:56.089 15:12:59 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:56.089 15:12:59 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:56.089 15:12:59 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:56.089 15:12:59 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:56.089 15:12:59 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.089 15:12:59 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:56.089 15:12:59 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:56.089 15:12:59 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:56.089 15:12:59 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:56.089 15:12:59 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:56.089 15:12:59 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:56.089 15:12:59 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:56.089 15:12:59 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:56.089 15:12:59 json_config -- scripts/common.sh@345 -- # : 1 00:03:56.089 15:12:59 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:56.089 15:12:59 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:56.089 15:12:59 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:56.089 15:12:59 json_config -- scripts/common.sh@353 -- # local d=1 00:03:56.089 15:12:59 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.089 15:12:59 json_config -- scripts/common.sh@355 -- # echo 1 00:03:56.089 15:12:59 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:56.089 15:12:59 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:56.089 15:12:59 json_config -- scripts/common.sh@353 -- # local d=2 00:03:56.089 15:12:59 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.089 15:12:59 json_config -- scripts/common.sh@355 -- # echo 2 00:03:56.089 15:12:59 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:56.089 15:12:59 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:56.090 15:12:59 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:56.090 15:12:59 json_config -- scripts/common.sh@368 -- # return 0 00:03:56.090 15:12:59 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.090 15:12:59 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:56.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.090 --rc genhtml_branch_coverage=1 00:03:56.090 --rc genhtml_function_coverage=1 00:03:56.090 --rc genhtml_legend=1 00:03:56.090 --rc geninfo_all_blocks=1 00:03:56.090 --rc geninfo_unexecuted_blocks=1 00:03:56.090 00:03:56.090 ' 00:03:56.090 15:12:59 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:56.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.090 --rc genhtml_branch_coverage=1 00:03:56.090 --rc genhtml_function_coverage=1 00:03:56.090 --rc genhtml_legend=1 00:03:56.090 --rc geninfo_all_blocks=1 00:03:56.090 --rc geninfo_unexecuted_blocks=1 00:03:56.090 00:03:56.090 ' 00:03:56.090 15:12:59 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:56.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.090 --rc genhtml_branch_coverage=1 00:03:56.090 --rc genhtml_function_coverage=1 00:03:56.090 --rc genhtml_legend=1 00:03:56.090 --rc geninfo_all_blocks=1 00:03:56.090 --rc geninfo_unexecuted_blocks=1 00:03:56.090 00:03:56.090 ' 00:03:56.090 15:12:59 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:56.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.090 --rc genhtml_branch_coverage=1 00:03:56.090 --rc genhtml_function_coverage=1 00:03:56.090 --rc genhtml_legend=1 00:03:56.090 --rc geninfo_all_blocks=1 00:03:56.090 --rc geninfo_unexecuted_blocks=1 00:03:56.090 00:03:56.090 ' 00:03:56.090 15:12:59 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:56.090 15:12:59 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:56.090 15:12:59 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:56.090 15:12:59 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:56.090 15:12:59 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:56.090 15:12:59 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.090 15:12:59 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.090 15:12:59 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.090 15:12:59 json_config -- paths/export.sh@5 -- # export PATH 00:03:56.090 15:12:59 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@51 -- # : 0 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:56.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:56.090 15:12:59 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:56.090 15:12:59 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:56.090 15:12:59 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:56.090 15:12:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:56.090 15:12:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:56.090 15:12:59 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:56.090 15:12:59 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:56.090 15:12:59 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:56.090 15:12:59 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:56.090 15:12:59 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:56.090 15:12:59 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:56.090 15:12:59 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:56.090 15:12:59 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:56.090 15:12:59 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:56.090 15:12:59 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:56.090 15:12:59 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:56.090 15:12:59 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:56.090 INFO: JSON configuration test init 00:03:56.090 15:12:59 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:56.090 15:12:59 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:56.090 15:12:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.090 15:12:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.090 15:12:59 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:56.090 15:12:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.090 15:12:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.090 15:12:59 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:56.090 15:12:59 json_config -- json_config/common.sh@9 -- # local app=target 00:03:56.090 15:12:59 json_config -- json_config/common.sh@10 -- # shift 00:03:56.090 15:12:59 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:56.090 15:12:59 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:56.090 15:12:59 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:56.090 15:12:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:56.090 15:12:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:56.090 15:12:59 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1968958 00:03:56.349 15:12:59 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:56.349 Waiting for target to run... 
00:03:56.349 15:12:59 json_config -- json_config/common.sh@25 -- # waitforlisten 1968958 /var/tmp/spdk_tgt.sock 00:03:56.349 15:12:59 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:56.349 15:12:59 json_config -- common/autotest_common.sh@835 -- # '[' -z 1968958 ']' 00:03:56.349 15:12:59 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:56.349 15:12:59 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:56.349 15:12:59 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:56.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:56.350 15:12:59 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:56.350 15:12:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.350 [2024-11-20 15:13:00.048608] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:03:56.350 [2024-11-20 15:13:00.048662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1968958 ] 00:03:56.608 [2024-11-20 15:13:00.500138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.866 [2024-11-20 15:13:00.554251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.124 15:13:00 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:57.124 15:13:00 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:57.124 15:13:00 json_config -- json_config/common.sh@26 -- # echo '' 00:03:57.124 00:03:57.124 15:13:00 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:57.124 15:13:00 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:57.124 15:13:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:57.124 15:13:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.124 15:13:00 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:57.124 15:13:00 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:57.124 15:13:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:57.124 15:13:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.124 15:13:00 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:57.124 15:13:00 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:57.124 15:13:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:00.411 15:13:04 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.411 15:13:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:00.411 15:13:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@54 -- # sort 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:00.411 15:13:04 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:00.411 15:13:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:00.411 15:13:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:00.411 15:13:04 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.411 15:13:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:00.411 15:13:04 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:00.411 15:13:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:00.670 MallocForNvmf0 00:04:00.670 15:13:04 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:00.670 15:13:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:00.928 MallocForNvmf1 00:04:00.928 15:13:04 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:00.928 15:13:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:01.187 [2024-11-20 15:13:04.874999] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:01.187 15:13:04 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:01.187 15:13:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:01.446 15:13:05 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:01.446 15:13:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:01.446 15:13:05 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:01.446 15:13:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:01.705 15:13:05 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:01.705 15:13:05 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:01.964 [2024-11-20 15:13:05.657485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:01.964 15:13:05 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:01.964 15:13:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:01.964 15:13:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.964 15:13:05 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:01.964 15:13:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:01.964 15:13:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.964 15:13:05 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:01.964 15:13:05 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:01.964 15:13:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:02.223 MallocBdevForConfigChangeCheck 00:04:02.223 15:13:05 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:02.223 15:13:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:02.223 15:13:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.223 15:13:05 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:02.223 15:13:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:02.482 15:13:06 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:02.482 INFO: shutting down applications... 00:04:02.482 15:13:06 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:02.482 15:13:06 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:02.482 15:13:06 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:02.482 15:13:06 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:04.386 Calling clear_iscsi_subsystem 00:04:04.386 Calling clear_nvmf_subsystem 00:04:04.386 Calling clear_nbd_subsystem 00:04:04.386 Calling clear_ublk_subsystem 00:04:04.386 Calling clear_vhost_blk_subsystem 00:04:04.386 Calling clear_vhost_scsi_subsystem 00:04:04.386 Calling clear_bdev_subsystem 00:04:04.386 15:13:07 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:04.386 15:13:07 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:04.386 15:13:07 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:04.386 15:13:07 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:04.386 15:13:07 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:04.386 15:13:07 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:04.386 15:13:08 json_config -- json_config/json_config.sh@352 -- # break 00:04:04.386 15:13:08 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:04.386 15:13:08 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:04.386 15:13:08 json_config -- json_config/common.sh@31 -- # local app=target 00:04:04.386 15:13:08 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:04.386 15:13:08 json_config -- json_config/common.sh@35 -- # [[ -n 1968958 ]] 00:04:04.386 15:13:08 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1968958 00:04:04.386 15:13:08 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:04.386 15:13:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:04.386 15:13:08 json_config -- json_config/common.sh@41 -- # kill -0 1968958 00:04:04.386 15:13:08 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:04.954 15:13:08 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:04.954 15:13:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:04.954 15:13:08 json_config -- json_config/common.sh@41 -- # kill -0 1968958 00:04:04.954 15:13:08 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:04.954 15:13:08 json_config -- json_config/common.sh@43 -- # break 00:04:04.954 15:13:08 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:04.954 15:13:08 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:04.954 SPDK target shutdown done 00:04:04.954 15:13:08 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:04.954 INFO: relaunching applications... 
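Annotator's note: the shutdown trace above sends SIGINT to the target and then polls with `kill -0` up to 30 times, sleeping between checks, until the PID disappears. A minimal standalone sketch of that pattern follows; the retry count mirrors the log, but the dummy `sleep` process is mine, and I use SIGTERM instead of SIGINT because background children of a non-interactive shell ignore SIGINT unless job control is enabled.

```shell
#!/usr/bin/env bash
# Stand-in for the target process (illustrative, not spdk_tgt).
sleep 30 &
pid=$!

kill -TERM "$pid"            # ask the process to exit (the log uses SIGINT)
i=0
while (( i < 30 )); do       # same 30-iteration budget as common.sh's loop
    if ! kill -0 "$pid" 2>/dev/null; then
        break                # kill -0 fails once the process is gone
    fi
    sleep 0.5
    (( i++ ))
done
echo 'SPDK-style target shutdown done'
```

`kill -0` sends no signal; it only checks whether the PID still exists, which is why it works as a liveness probe between sleeps.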
00:04:04.954 15:13:08 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.954 15:13:08 json_config -- json_config/common.sh@9 -- # local app=target 00:04:04.954 15:13:08 json_config -- json_config/common.sh@10 -- # shift 00:04:04.954 15:13:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:04.954 15:13:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:04.954 15:13:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:04.954 15:13:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.954 15:13:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.954 15:13:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1970489 00:04:04.954 15:13:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:04.954 Waiting for target to run... 00:04:04.954 15:13:08 json_config -- json_config/common.sh@25 -- # waitforlisten 1970489 /var/tmp/spdk_tgt.sock 00:04:04.954 15:13:08 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.954 15:13:08 json_config -- common/autotest_common.sh@835 -- # '[' -z 1970489 ']' 00:04:04.954 15:13:08 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:04.954 15:13:08 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:04.954 15:13:08 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:04.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:04.954 15:13:08 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:04.954 15:13:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.954 [2024-11-20 15:13:08.835596] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:04:04.954 [2024-11-20 15:13:08.835655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1970489 ] 00:04:05.213 [2024-11-20 15:13:09.119318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.472 [2024-11-20 15:13:09.155436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.761 [2024-11-20 15:13:12.188839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:08.761 [2024-11-20 15:13:12.221221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:08.761 15:13:12 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:08.761 15:13:12 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:08.761 15:13:12 json_config -- json_config/common.sh@26 -- # echo '' 00:04:08.761 00:04:08.761 15:13:12 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:08.761 15:13:12 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:08.761 INFO: Checking if target configuration is the same... 
00:04:08.761 15:13:12 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:08.761 15:13:12 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:08.761 15:13:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:08.761 + '[' 2 -ne 2 ']' 00:04:08.761 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:08.761 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:08.761 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.761 +++ basename /dev/fd/62 00:04:08.761 ++ mktemp /tmp/62.XXX 00:04:08.761 + tmp_file_1=/tmp/62.pnw 00:04:08.761 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:08.761 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:08.761 + tmp_file_2=/tmp/spdk_tgt_config.json.MRJ 00:04:08.761 + ret=0 00:04:08.761 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:08.761 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:08.761 + diff -u /tmp/62.pnw /tmp/spdk_tgt_config.json.MRJ 00:04:08.761 + echo 'INFO: JSON config files are the same' 00:04:08.761 INFO: JSON config files are the same 00:04:08.761 + rm /tmp/62.pnw /tmp/spdk_tgt_config.json.MRJ 00:04:08.761 + exit 0 00:04:08.761 15:13:12 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:08.761 15:13:12 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:08.761 INFO: changing configuration and checking if this can be detected... 
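Annotator's note: the `json_diff.sh` trace above compares the runtime config (saved over RPC) against the on-disk JSON by writing both to `mktemp` files, normalizing them through `config_filter.py -method sort`, and running `diff -u`. A self-contained sketch of the same normalize-then-diff idea; the `normalize` helper and the inline JSON inputs are illustrative stand-ins, not SPDK's filter.

```shell
#!/usr/bin/env bash
# Normalize JSON key order so semantically equal configs diff clean
# (stand-in for config_filter.py -method sort).
normalize() {
    python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin), sort_keys=True, indent=2))'
}

tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/cfg.json.XXX)

echo '{"b": 2, "a": 1}' | normalize > "$tmp_file_1"
echo '{"a": 1, "b": 2}' | normalize > "$tmp_file_2"

if diff -u "$tmp_file_1" "$tmp_file_2" > /dev/null; then
    result="same"            # exit 0 path: "JSON config files are the same"
else
    result="different"       # exit 1 path: dump both files for debugging
fi
rm -f "$tmp_file_1" "$tmp_file_2"
echo "INFO: JSON config files are $result"
```

Sorting keys before diffing is what lets the test tolerate RPC output ordering while still catching the deliberate `bdev_malloc_delete` change later in the log.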
00:04:08.761 15:13:12 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:08.761 15:13:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:09.020 15:13:12 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:09.020 15:13:12 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:09.020 15:13:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:09.020 + '[' 2 -ne 2 ']' 00:04:09.020 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:09.020 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:09.020 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:09.020 +++ basename /dev/fd/62 00:04:09.020 ++ mktemp /tmp/62.XXX 00:04:09.020 + tmp_file_1=/tmp/62.KNh 00:04:09.020 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:09.020 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:09.020 + tmp_file_2=/tmp/spdk_tgt_config.json.mxq 00:04:09.020 + ret=0 00:04:09.020 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:09.589 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:09.589 + diff -u /tmp/62.KNh /tmp/spdk_tgt_config.json.mxq 00:04:09.589 + ret=1 00:04:09.589 + echo '=== Start of file: /tmp/62.KNh ===' 00:04:09.589 + cat /tmp/62.KNh 00:04:09.589 + echo '=== End of file: /tmp/62.KNh ===' 00:04:09.589 + echo '' 00:04:09.589 + echo '=== Start of file: /tmp/spdk_tgt_config.json.mxq ===' 00:04:09.589 + cat /tmp/spdk_tgt_config.json.mxq 00:04:09.589 + echo '=== End of file: /tmp/spdk_tgt_config.json.mxq ===' 00:04:09.589 + echo '' 00:04:09.589 + rm /tmp/62.KNh /tmp/spdk_tgt_config.json.mxq 00:04:09.589 + exit 1 00:04:09.589 15:13:13 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:09.589 INFO: configuration change detected. 
00:04:09.589 15:13:13 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:09.589 15:13:13 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:09.589 15:13:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.589 15:13:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.589 15:13:13 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:09.589 15:13:13 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:09.589 15:13:13 json_config -- json_config/json_config.sh@324 -- # [[ -n 1970489 ]] 00:04:09.589 15:13:13 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:09.589 15:13:13 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:09.589 15:13:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.589 15:13:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.589 15:13:13 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:09.589 15:13:13 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:09.589 15:13:13 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:09.589 15:13:13 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:09.589 15:13:13 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:09.589 15:13:13 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:09.589 15:13:13 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:09.589 15:13:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.589 15:13:13 json_config -- json_config/json_config.sh@330 -- # killprocess 1970489 00:04:09.589 15:13:13 json_config -- common/autotest_common.sh@954 -- # '[' -z 1970489 ']' 00:04:09.589 15:13:13 json_config -- common/autotest_common.sh@958 -- # kill -0 
1970489 00:04:09.589 15:13:13 json_config -- common/autotest_common.sh@959 -- # uname 00:04:09.589 15:13:13 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:09.589 15:13:13 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1970489 00:04:09.589 15:13:13 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:09.589 15:13:13 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:09.589 15:13:13 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1970489' 00:04:09.589 killing process with pid 1970489 00:04:09.589 15:13:13 json_config -- common/autotest_common.sh@973 -- # kill 1970489 00:04:09.589 15:13:13 json_config -- common/autotest_common.sh@978 -- # wait 1970489 00:04:11.494 15:13:14 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:11.494 15:13:14 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:11.494 15:13:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:11.494 15:13:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.494 15:13:14 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:11.494 15:13:14 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:11.494 INFO: Success 00:04:11.494 00:04:11.494 real 0m15.118s 00:04:11.494 user 0m15.616s 00:04:11.494 sys 0m2.600s 00:04:11.494 15:13:14 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.494 15:13:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.494 ************************************ 00:04:11.494 END TEST json_config 00:04:11.494 ************************************ 00:04:11.494 15:13:14 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:11.494 15:13:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.494 15:13:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.494 15:13:14 -- common/autotest_common.sh@10 -- # set +x 00:04:11.494 ************************************ 00:04:11.494 START TEST json_config_extra_key 00:04:11.494 ************************************ 00:04:11.494 15:13:14 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:11.494 15:13:15 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:11.494 15:13:15 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:11.494 15:13:15 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:11.494 15:13:15 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:11.494 15:13:15 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.494 15:13:15 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:11.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.494 --rc genhtml_branch_coverage=1 00:04:11.494 --rc genhtml_function_coverage=1 00:04:11.494 --rc genhtml_legend=1 00:04:11.494 --rc geninfo_all_blocks=1 
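Annotator's note: the xtrace above steps through `scripts/common.sh`'s `lt 1.15 2` / `cmp_versions`, which splits each version on `.-:` into arrays and compares component by component, padding the shorter array with zeros. An illustrative re-implementation of that idea; the function name `version_lt` is mine, not the script's.

```shell
#!/usr/bin/env bash
# Return 0 if $1 < $2, comparing dot-separated numeric components,
# in the spirit of the cmp_versions trace above.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing components default to 0; 10# forces base-10
        # so components like "08" are not read as octal.
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( 10#$a < 10#$b )) && return 0
        (( 10#$a > 10#$b )) && return 1
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.1 2.0 || echo "2.1 >= 2.0"
```

Numeric per-component comparison is what makes `1.15 < 2` true here, where a plain string compare would get it wrong.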
00:04:11.494 --rc geninfo_unexecuted_blocks=1 00:04:11.494 00:04:11.494 ' 00:04:11.494 15:13:15 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:11.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.494 --rc genhtml_branch_coverage=1 00:04:11.494 --rc genhtml_function_coverage=1 00:04:11.494 --rc genhtml_legend=1 00:04:11.494 --rc geninfo_all_blocks=1 00:04:11.494 --rc geninfo_unexecuted_blocks=1 00:04:11.494 00:04:11.494 ' 00:04:11.494 15:13:15 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:11.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.494 --rc genhtml_branch_coverage=1 00:04:11.494 --rc genhtml_function_coverage=1 00:04:11.494 --rc genhtml_legend=1 00:04:11.494 --rc geninfo_all_blocks=1 00:04:11.494 --rc geninfo_unexecuted_blocks=1 00:04:11.494 00:04:11.494 ' 00:04:11.494 15:13:15 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:11.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.494 --rc genhtml_branch_coverage=1 00:04:11.494 --rc genhtml_function_coverage=1 00:04:11.494 --rc genhtml_legend=1 00:04:11.494 --rc geninfo_all_blocks=1 00:04:11.494 --rc geninfo_unexecuted_blocks=1 00:04:11.494 00:04:11.494 ' 00:04:11.494 15:13:15 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:11.494 15:13:15 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:11.494 15:13:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:11.494 15:13:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:11.494 15:13:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:11.494 15:13:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:11.494 15:13:15 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:11.494 15:13:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:11.494 15:13:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:11.494 15:13:15 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:11.494 15:13:15 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:11.494 15:13:15 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:11.494 15:13:15 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:11.494 15:13:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:11.494 15:13:15 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:11.494 15:13:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:11.494 15:13:15 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:11.494 15:13:15 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:11.494 15:13:15 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:11.494 15:13:15 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:11.495 15:13:15 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.495 15:13:15 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.495 15:13:15 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.495 15:13:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:11.495 15:13:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.495 15:13:15 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:11.495 15:13:15 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:11.495 15:13:15 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:11.495 15:13:15 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:11.495 15:13:15 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:11.495 15:13:15 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:11.495 15:13:15 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:11.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:11.495 15:13:15 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:11.495 15:13:15 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:11.495 15:13:15 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:11.495 15:13:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:11.495 15:13:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:11.495 15:13:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:11.495 15:13:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:11.495 15:13:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:11.495 15:13:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:11.495 15:13:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:11.495 15:13:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:11.495 15:13:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:11.495 15:13:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:11.495 15:13:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:11.495 INFO: launching applications... 00:04:11.495 15:13:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:11.495 15:13:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:11.495 15:13:15 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:11.495 15:13:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:11.495 15:13:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:11.495 15:13:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:11.495 15:13:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.495 15:13:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.495 15:13:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1971743 00:04:11.495 15:13:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:11.495 Waiting for target to run... 
00:04:11.495 15:13:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1971743 /var/tmp/spdk_tgt.sock 00:04:11.495 15:13:15 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1971743 ']' 00:04:11.495 15:13:15 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:11.495 15:13:15 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:11.495 15:13:15 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:11.495 15:13:15 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:11.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:11.495 15:13:15 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:11.495 15:13:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:11.495 [2024-11-20 15:13:15.218503] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
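Annotator's note: `waitforlisten` above blocks until `spdk_tgt` is up and listening on `/var/tmp/spdk_tgt.sock`. The core idea can be sketched as polling for the UNIX socket path with a retry cap; the delayed Python "server" below is a stand-in for the real target, and the retry budget of 100 echoes `max_retries=100` in the log.

```shell
#!/usr/bin/env bash
sock=$(mktemp -u /tmp/demo_rpc.XXXX.sock)

# Stand-in for spdk_tgt: create the listening socket after a short delay.
( sleep 0.2
  python3 -c "import socket,sys
s = socket.socket(socket.AF_UNIX)
s.bind(sys.argv[1])" "$sock" ) &

retries=100
until [ -S "$sock" ]; do             # -S: path exists and is a socket
    (( retries-- )) || break         # give up once the budget is spent
    sleep 0.1
done

if [ -S "$sock" ]; then
    state="listening"
else
    state="timed out"
fi
echo "target socket: $state"
wait
rm -f "$sock"
```

Polling the socket file is only an existence check; the real helper goes further and issues an RPC over the socket, but the retry-with-sleep shape is the same.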
00:04:11.495 [2024-11-20 15:13:15.218553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1971743 ] 00:04:12.063 [2024-11-20 15:13:15.668781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.063 [2024-11-20 15:13:15.725407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.322 15:13:16 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:12.322 15:13:16 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:12.322 15:13:16 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:12.322 00:04:12.322 15:13:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:12.322 INFO: shutting down applications... 00:04:12.322 15:13:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:12.322 15:13:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:12.322 15:13:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:12.322 15:13:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1971743 ]] 00:04:12.322 15:13:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1971743 00:04:12.322 15:13:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:12.322 15:13:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:12.322 15:13:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1971743 00:04:12.322 15:13:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:12.888 15:13:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:12.888 15:13:16 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:12.888 15:13:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1971743 00:04:12.888 15:13:16 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:12.888 15:13:16 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:12.888 15:13:16 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:12.888 15:13:16 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:12.888 SPDK target shutdown done 00:04:12.888 15:13:16 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:12.888 Success 00:04:12.888 00:04:12.888 real 0m1.591s 00:04:12.888 user 0m1.240s 00:04:12.888 sys 0m0.562s 00:04:12.888 15:13:16 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.888 15:13:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:12.888 ************************************ 00:04:12.888 END TEST json_config_extra_key 00:04:12.888 ************************************ 00:04:12.888 15:13:16 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:12.888 15:13:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.888 15:13:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.888 15:13:16 -- common/autotest_common.sh@10 -- # set +x 00:04:12.889 ************************************ 00:04:12.889 START TEST alias_rpc 00:04:12.889 ************************************ 00:04:12.889 15:13:16 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:12.889 * Looking for test storage... 
00:04:12.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:12.889 15:13:16 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:12.889 15:13:16 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:12.889 15:13:16 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:13.147 15:13:16 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.147 15:13:16 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:13.147 15:13:16 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.147 15:13:16 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:13.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.147 --rc genhtml_branch_coverage=1 00:04:13.147 --rc genhtml_function_coverage=1 00:04:13.147 --rc genhtml_legend=1 00:04:13.147 --rc geninfo_all_blocks=1 00:04:13.147 --rc geninfo_unexecuted_blocks=1 00:04:13.147 00:04:13.147 ' 00:04:13.147 15:13:16 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:13.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.147 --rc genhtml_branch_coverage=1 00:04:13.147 --rc genhtml_function_coverage=1 00:04:13.147 --rc genhtml_legend=1 00:04:13.147 --rc geninfo_all_blocks=1 00:04:13.147 --rc geninfo_unexecuted_blocks=1 00:04:13.147 00:04:13.147 ' 00:04:13.147 15:13:16 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:04:13.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.147 --rc genhtml_branch_coverage=1 00:04:13.147 --rc genhtml_function_coverage=1 00:04:13.147 --rc genhtml_legend=1 00:04:13.147 --rc geninfo_all_blocks=1 00:04:13.147 --rc geninfo_unexecuted_blocks=1 00:04:13.147 00:04:13.147 ' 00:04:13.147 15:13:16 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:13.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.147 --rc genhtml_branch_coverage=1 00:04:13.147 --rc genhtml_function_coverage=1 00:04:13.147 --rc genhtml_legend=1 00:04:13.147 --rc geninfo_all_blocks=1 00:04:13.147 --rc geninfo_unexecuted_blocks=1 00:04:13.147 00:04:13.147 ' 00:04:13.147 15:13:16 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:13.147 15:13:16 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1972042 00:04:13.147 15:13:16 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1972042 00:04:13.147 15:13:16 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.147 15:13:16 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1972042 ']' 00:04:13.147 15:13:16 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.147 15:13:16 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.147 15:13:16 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.147 15:13:16 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.147 15:13:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.147 [2024-11-20 15:13:16.873201] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:04:13.148 [2024-11-20 15:13:16.873248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1972042 ] 00:04:13.148 [2024-11-20 15:13:16.948029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.148 [2024-11-20 15:13:16.990636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.406 15:13:17 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.406 15:13:17 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:13.406 15:13:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:13.666 15:13:17 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1972042 00:04:13.666 15:13:17 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1972042 ']' 00:04:13.666 15:13:17 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1972042 00:04:13.666 15:13:17 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:13.666 15:13:17 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.666 15:13:17 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1972042 00:04:13.666 15:13:17 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:13.666 15:13:17 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:13.666 15:13:17 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1972042' 00:04:13.666 killing process with pid 1972042 00:04:13.666 15:13:17 alias_rpc -- common/autotest_common.sh@973 -- # kill 1972042 00:04:13.666 15:13:17 alias_rpc -- common/autotest_common.sh@978 -- # wait 1972042 00:04:13.926 00:04:13.926 real 0m1.132s 00:04:13.926 user 0m1.143s 00:04:13.926 sys 0m0.422s 00:04:13.926 15:13:17 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.926 15:13:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.926 ************************************ 00:04:13.926 END TEST alias_rpc 00:04:13.926 ************************************ 00:04:13.926 15:13:17 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:13.926 15:13:17 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:13.926 15:13:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.926 15:13:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.926 15:13:17 -- common/autotest_common.sh@10 -- # set +x 00:04:14.185 ************************************ 00:04:14.185 START TEST spdkcli_tcp 00:04:14.185 ************************************ 00:04:14.185 15:13:17 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:14.185 * Looking for test storage... 
00:04:14.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:14.185 15:13:17 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:14.185 15:13:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:14.185 15:13:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:14.185 15:13:18 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.185 15:13:18 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:14.185 15:13:18 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.185 15:13:18 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:14.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.185 --rc genhtml_branch_coverage=1 00:04:14.185 --rc genhtml_function_coverage=1 00:04:14.185 --rc genhtml_legend=1 00:04:14.185 --rc geninfo_all_blocks=1 00:04:14.185 --rc geninfo_unexecuted_blocks=1 00:04:14.185 00:04:14.185 ' 00:04:14.185 15:13:18 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:14.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.185 --rc genhtml_branch_coverage=1 00:04:14.185 --rc genhtml_function_coverage=1 00:04:14.185 --rc genhtml_legend=1 00:04:14.186 --rc geninfo_all_blocks=1 00:04:14.186 --rc geninfo_unexecuted_blocks=1 00:04:14.186 00:04:14.186 ' 00:04:14.186 15:13:18 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:14.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.186 --rc genhtml_branch_coverage=1 00:04:14.186 --rc genhtml_function_coverage=1 00:04:14.186 --rc genhtml_legend=1 00:04:14.186 --rc geninfo_all_blocks=1 00:04:14.186 --rc geninfo_unexecuted_blocks=1 00:04:14.186 00:04:14.186 ' 00:04:14.186 15:13:18 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:14.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.186 --rc genhtml_branch_coverage=1 00:04:14.186 --rc genhtml_function_coverage=1 00:04:14.186 --rc genhtml_legend=1 00:04:14.186 --rc geninfo_all_blocks=1 00:04:14.186 --rc geninfo_unexecuted_blocks=1 00:04:14.186 00:04:14.186 ' 00:04:14.186 15:13:18 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:14.186 15:13:18 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:14.186 15:13:18 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:14.186 15:13:18 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:14.186 15:13:18 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:14.186 15:13:18 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:14.186 15:13:18 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:14.186 15:13:18 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.186 15:13:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:14.186 15:13:18 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:14.186 15:13:18 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1972329 00:04:14.186 15:13:18 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 1972329 00:04:14.186 15:13:18 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1972329 ']' 00:04:14.186 15:13:18 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.186 15:13:18 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.186 15:13:18 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.186 15:13:18 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.186 15:13:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:14.186 [2024-11-20 15:13:18.073975] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:04:14.186 [2024-11-20 15:13:18.074022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1972329 ] 00:04:14.446 [2024-11-20 15:13:18.150821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:14.446 [2024-11-20 15:13:18.192032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.446 [2024-11-20 15:13:18.192033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.706 15:13:18 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:14.706 15:13:18 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:14.706 15:13:18 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1972342 00:04:14.706 15:13:18 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:14.706 15:13:18 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:14.706 [ 00:04:14.706 "bdev_malloc_delete", 00:04:14.706 "bdev_malloc_create", 00:04:14.706 "bdev_null_resize", 00:04:14.706 "bdev_null_delete", 00:04:14.706 "bdev_null_create", 00:04:14.706 "bdev_nvme_cuse_unregister", 00:04:14.706 "bdev_nvme_cuse_register", 00:04:14.706 "bdev_opal_new_user", 00:04:14.706 "bdev_opal_set_lock_state", 00:04:14.706 "bdev_opal_delete", 00:04:14.706 "bdev_opal_get_info", 00:04:14.706 "bdev_opal_create", 00:04:14.706 "bdev_nvme_opal_revert", 00:04:14.706 "bdev_nvme_opal_init", 00:04:14.706 "bdev_nvme_send_cmd", 00:04:14.706 "bdev_nvme_set_keys", 00:04:14.706 "bdev_nvme_get_path_iostat", 00:04:14.706 "bdev_nvme_get_mdns_discovery_info", 00:04:14.706 "bdev_nvme_stop_mdns_discovery", 00:04:14.706 "bdev_nvme_start_mdns_discovery", 00:04:14.706 "bdev_nvme_set_multipath_policy", 00:04:14.706 "bdev_nvme_set_preferred_path", 00:04:14.706 "bdev_nvme_get_io_paths", 00:04:14.706 "bdev_nvme_remove_error_injection", 00:04:14.706 "bdev_nvme_add_error_injection", 00:04:14.706 "bdev_nvme_get_discovery_info", 00:04:14.706 "bdev_nvme_stop_discovery", 00:04:14.706 "bdev_nvme_start_discovery", 00:04:14.706 "bdev_nvme_get_controller_health_info", 00:04:14.706 "bdev_nvme_disable_controller", 00:04:14.706 "bdev_nvme_enable_controller", 00:04:14.706 "bdev_nvme_reset_controller", 00:04:14.706 "bdev_nvme_get_transport_statistics", 00:04:14.706 "bdev_nvme_apply_firmware", 00:04:14.706 "bdev_nvme_detach_controller", 00:04:14.706 "bdev_nvme_get_controllers", 00:04:14.706 "bdev_nvme_attach_controller", 00:04:14.706 "bdev_nvme_set_hotplug", 00:04:14.706 "bdev_nvme_set_options", 00:04:14.706 "bdev_passthru_delete", 00:04:14.706 "bdev_passthru_create", 00:04:14.706 "bdev_lvol_set_parent_bdev", 00:04:14.706 "bdev_lvol_set_parent", 00:04:14.706 "bdev_lvol_check_shallow_copy", 00:04:14.706 "bdev_lvol_start_shallow_copy", 00:04:14.706 "bdev_lvol_grow_lvstore", 00:04:14.706 "bdev_lvol_get_lvols", 00:04:14.706 
"bdev_lvol_get_lvstores", 00:04:14.706 "bdev_lvol_delete", 00:04:14.706 "bdev_lvol_set_read_only", 00:04:14.706 "bdev_lvol_resize", 00:04:14.706 "bdev_lvol_decouple_parent", 00:04:14.706 "bdev_lvol_inflate", 00:04:14.706 "bdev_lvol_rename", 00:04:14.706 "bdev_lvol_clone_bdev", 00:04:14.706 "bdev_lvol_clone", 00:04:14.706 "bdev_lvol_snapshot", 00:04:14.706 "bdev_lvol_create", 00:04:14.706 "bdev_lvol_delete_lvstore", 00:04:14.706 "bdev_lvol_rename_lvstore", 00:04:14.706 "bdev_lvol_create_lvstore", 00:04:14.706 "bdev_raid_set_options", 00:04:14.706 "bdev_raid_remove_base_bdev", 00:04:14.706 "bdev_raid_add_base_bdev", 00:04:14.706 "bdev_raid_delete", 00:04:14.706 "bdev_raid_create", 00:04:14.706 "bdev_raid_get_bdevs", 00:04:14.706 "bdev_error_inject_error", 00:04:14.706 "bdev_error_delete", 00:04:14.706 "bdev_error_create", 00:04:14.706 "bdev_split_delete", 00:04:14.706 "bdev_split_create", 00:04:14.706 "bdev_delay_delete", 00:04:14.706 "bdev_delay_create", 00:04:14.706 "bdev_delay_update_latency", 00:04:14.706 "bdev_zone_block_delete", 00:04:14.706 "bdev_zone_block_create", 00:04:14.706 "blobfs_create", 00:04:14.706 "blobfs_detect", 00:04:14.706 "blobfs_set_cache_size", 00:04:14.706 "bdev_aio_delete", 00:04:14.706 "bdev_aio_rescan", 00:04:14.706 "bdev_aio_create", 00:04:14.706 "bdev_ftl_set_property", 00:04:14.706 "bdev_ftl_get_properties", 00:04:14.706 "bdev_ftl_get_stats", 00:04:14.706 "bdev_ftl_unmap", 00:04:14.706 "bdev_ftl_unload", 00:04:14.706 "bdev_ftl_delete", 00:04:14.706 "bdev_ftl_load", 00:04:14.706 "bdev_ftl_create", 00:04:14.706 "bdev_virtio_attach_controller", 00:04:14.706 "bdev_virtio_scsi_get_devices", 00:04:14.706 "bdev_virtio_detach_controller", 00:04:14.706 "bdev_virtio_blk_set_hotplug", 00:04:14.706 "bdev_iscsi_delete", 00:04:14.706 "bdev_iscsi_create", 00:04:14.706 "bdev_iscsi_set_options", 00:04:14.707 "accel_error_inject_error", 00:04:14.707 "ioat_scan_accel_module", 00:04:14.707 "dsa_scan_accel_module", 00:04:14.707 "iaa_scan_accel_module", 
00:04:14.707 "vfu_virtio_create_fs_endpoint", 00:04:14.707 "vfu_virtio_create_scsi_endpoint", 00:04:14.707 "vfu_virtio_scsi_remove_target", 00:04:14.707 "vfu_virtio_scsi_add_target", 00:04:14.707 "vfu_virtio_create_blk_endpoint", 00:04:14.707 "vfu_virtio_delete_endpoint", 00:04:14.707 "keyring_file_remove_key", 00:04:14.707 "keyring_file_add_key", 00:04:14.707 "keyring_linux_set_options", 00:04:14.707 "fsdev_aio_delete", 00:04:14.707 "fsdev_aio_create", 00:04:14.707 "iscsi_get_histogram", 00:04:14.707 "iscsi_enable_histogram", 00:04:14.707 "iscsi_set_options", 00:04:14.707 "iscsi_get_auth_groups", 00:04:14.707 "iscsi_auth_group_remove_secret", 00:04:14.707 "iscsi_auth_group_add_secret", 00:04:14.707 "iscsi_delete_auth_group", 00:04:14.707 "iscsi_create_auth_group", 00:04:14.707 "iscsi_set_discovery_auth", 00:04:14.707 "iscsi_get_options", 00:04:14.707 "iscsi_target_node_request_logout", 00:04:14.707 "iscsi_target_node_set_redirect", 00:04:14.707 "iscsi_target_node_set_auth", 00:04:14.707 "iscsi_target_node_add_lun", 00:04:14.707 "iscsi_get_stats", 00:04:14.707 "iscsi_get_connections", 00:04:14.707 "iscsi_portal_group_set_auth", 00:04:14.707 "iscsi_start_portal_group", 00:04:14.707 "iscsi_delete_portal_group", 00:04:14.707 "iscsi_create_portal_group", 00:04:14.707 "iscsi_get_portal_groups", 00:04:14.707 "iscsi_delete_target_node", 00:04:14.707 "iscsi_target_node_remove_pg_ig_maps", 00:04:14.707 "iscsi_target_node_add_pg_ig_maps", 00:04:14.707 "iscsi_create_target_node", 00:04:14.707 "iscsi_get_target_nodes", 00:04:14.707 "iscsi_delete_initiator_group", 00:04:14.707 "iscsi_initiator_group_remove_initiators", 00:04:14.707 "iscsi_initiator_group_add_initiators", 00:04:14.707 "iscsi_create_initiator_group", 00:04:14.707 "iscsi_get_initiator_groups", 00:04:14.707 "nvmf_set_crdt", 00:04:14.707 "nvmf_set_config", 00:04:14.707 "nvmf_set_max_subsystems", 00:04:14.707 "nvmf_stop_mdns_prr", 00:04:14.707 "nvmf_publish_mdns_prr", 00:04:14.707 "nvmf_subsystem_get_listeners", 
00:04:14.707 "nvmf_subsystem_get_qpairs", 00:04:14.707 "nvmf_subsystem_get_controllers", 00:04:14.707 "nvmf_get_stats", 00:04:14.707 "nvmf_get_transports", 00:04:14.707 "nvmf_create_transport", 00:04:14.707 "nvmf_get_targets", 00:04:14.707 "nvmf_delete_target", 00:04:14.707 "nvmf_create_target", 00:04:14.707 "nvmf_subsystem_allow_any_host", 00:04:14.707 "nvmf_subsystem_set_keys", 00:04:14.707 "nvmf_subsystem_remove_host", 00:04:14.707 "nvmf_subsystem_add_host", 00:04:14.707 "nvmf_ns_remove_host", 00:04:14.707 "nvmf_ns_add_host", 00:04:14.707 "nvmf_subsystem_remove_ns", 00:04:14.707 "nvmf_subsystem_set_ns_ana_group", 00:04:14.707 "nvmf_subsystem_add_ns", 00:04:14.707 "nvmf_subsystem_listener_set_ana_state", 00:04:14.707 "nvmf_discovery_get_referrals", 00:04:14.707 "nvmf_discovery_remove_referral", 00:04:14.707 "nvmf_discovery_add_referral", 00:04:14.707 "nvmf_subsystem_remove_listener", 00:04:14.707 "nvmf_subsystem_add_listener", 00:04:14.707 "nvmf_delete_subsystem", 00:04:14.707 "nvmf_create_subsystem", 00:04:14.707 "nvmf_get_subsystems", 00:04:14.707 "env_dpdk_get_mem_stats", 00:04:14.707 "nbd_get_disks", 00:04:14.707 "nbd_stop_disk", 00:04:14.707 "nbd_start_disk", 00:04:14.707 "ublk_recover_disk", 00:04:14.707 "ublk_get_disks", 00:04:14.707 "ublk_stop_disk", 00:04:14.707 "ublk_start_disk", 00:04:14.707 "ublk_destroy_target", 00:04:14.707 "ublk_create_target", 00:04:14.707 "virtio_blk_create_transport", 00:04:14.707 "virtio_blk_get_transports", 00:04:14.707 "vhost_controller_set_coalescing", 00:04:14.707 "vhost_get_controllers", 00:04:14.707 "vhost_delete_controller", 00:04:14.707 "vhost_create_blk_controller", 00:04:14.707 "vhost_scsi_controller_remove_target", 00:04:14.707 "vhost_scsi_controller_add_target", 00:04:14.707 "vhost_start_scsi_controller", 00:04:14.707 "vhost_create_scsi_controller", 00:04:14.707 "thread_set_cpumask", 00:04:14.707 "scheduler_set_options", 00:04:14.707 "framework_get_governor", 00:04:14.707 "framework_get_scheduler", 00:04:14.707 
"framework_set_scheduler", 00:04:14.707 "framework_get_reactors", 00:04:14.707 "thread_get_io_channels", 00:04:14.707 "thread_get_pollers", 00:04:14.707 "thread_get_stats", 00:04:14.707 "framework_monitor_context_switch", 00:04:14.707 "spdk_kill_instance", 00:04:14.707 "log_enable_timestamps", 00:04:14.707 "log_get_flags", 00:04:14.707 "log_clear_flag", 00:04:14.707 "log_set_flag", 00:04:14.707 "log_get_level", 00:04:14.707 "log_set_level", 00:04:14.707 "log_get_print_level", 00:04:14.707 "log_set_print_level", 00:04:14.707 "framework_enable_cpumask_locks", 00:04:14.707 "framework_disable_cpumask_locks", 00:04:14.707 "framework_wait_init", 00:04:14.707 "framework_start_init", 00:04:14.707 "scsi_get_devices", 00:04:14.707 "bdev_get_histogram", 00:04:14.707 "bdev_enable_histogram", 00:04:14.707 "bdev_set_qos_limit", 00:04:14.707 "bdev_set_qd_sampling_period", 00:04:14.707 "bdev_get_bdevs", 00:04:14.707 "bdev_reset_iostat", 00:04:14.707 "bdev_get_iostat", 00:04:14.707 "bdev_examine", 00:04:14.707 "bdev_wait_for_examine", 00:04:14.707 "bdev_set_options", 00:04:14.707 "accel_get_stats", 00:04:14.707 "accel_set_options", 00:04:14.707 "accel_set_driver", 00:04:14.707 "accel_crypto_key_destroy", 00:04:14.707 "accel_crypto_keys_get", 00:04:14.707 "accel_crypto_key_create", 00:04:14.707 "accel_assign_opc", 00:04:14.707 "accel_get_module_info", 00:04:14.707 "accel_get_opc_assignments", 00:04:14.707 "vmd_rescan", 00:04:14.707 "vmd_remove_device", 00:04:14.707 "vmd_enable", 00:04:14.707 "sock_get_default_impl", 00:04:14.707 "sock_set_default_impl", 00:04:14.707 "sock_impl_set_options", 00:04:14.707 "sock_impl_get_options", 00:04:14.707 "iobuf_get_stats", 00:04:14.707 "iobuf_set_options", 00:04:14.707 "keyring_get_keys", 00:04:14.707 "vfu_tgt_set_base_path", 00:04:14.707 "framework_get_pci_devices", 00:04:14.707 "framework_get_config", 00:04:14.707 "framework_get_subsystems", 00:04:14.707 "fsdev_set_opts", 00:04:14.707 "fsdev_get_opts", 00:04:14.707 "trace_get_info", 
00:04:14.707 "trace_get_tpoint_group_mask", 00:04:14.707 "trace_disable_tpoint_group", 00:04:14.707 "trace_enable_tpoint_group", 00:04:14.707 "trace_clear_tpoint_mask", 00:04:14.707 "trace_set_tpoint_mask", 00:04:14.707 "notify_get_notifications", 00:04:14.707 "notify_get_types", 00:04:14.707 "spdk_get_version", 00:04:14.707 "rpc_get_methods" 00:04:14.707 ] 00:04:14.707 15:13:18 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:14.707 15:13:18 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:14.707 15:13:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:14.966 15:13:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:14.966 15:13:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1972329 00:04:14.966 15:13:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1972329 ']' 00:04:14.966 15:13:18 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1972329 00:04:14.966 15:13:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:14.966 15:13:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.966 15:13:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1972329 00:04:14.966 15:13:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.967 15:13:18 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.967 15:13:18 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1972329' 00:04:14.967 killing process with pid 1972329 00:04:14.967 15:13:18 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1972329 00:04:14.967 15:13:18 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1972329 00:04:15.226 00:04:15.226 real 0m1.150s 00:04:15.226 user 0m1.952s 00:04:15.226 sys 0m0.429s 00:04:15.226 15:13:18 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.226 15:13:18 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:04:15.226 ************************************ 00:04:15.226 END TEST spdkcli_tcp 00:04:15.226 ************************************ 00:04:15.226 15:13:19 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:15.226 15:13:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.226 15:13:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.226 15:13:19 -- common/autotest_common.sh@10 -- # set +x 00:04:15.226 ************************************ 00:04:15.226 START TEST dpdk_mem_utility 00:04:15.226 ************************************ 00:04:15.226 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:15.484 * Looking for test storage... 00:04:15.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:15.484 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:15.484 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:15.484 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:15.484 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.484 15:13:19 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:15.484 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.484 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 
00:04:15.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.484 --rc genhtml_branch_coverage=1 00:04:15.484 --rc genhtml_function_coverage=1 00:04:15.484 --rc genhtml_legend=1 00:04:15.484 --rc geninfo_all_blocks=1 00:04:15.484 --rc geninfo_unexecuted_blocks=1 00:04:15.484 00:04:15.484 ' 00:04:15.484 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:15.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.484 --rc genhtml_branch_coverage=1 00:04:15.484 --rc genhtml_function_coverage=1 00:04:15.484 --rc genhtml_legend=1 00:04:15.484 --rc geninfo_all_blocks=1 00:04:15.484 --rc geninfo_unexecuted_blocks=1 00:04:15.484 00:04:15.484 ' 00:04:15.484 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:15.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.484 --rc genhtml_branch_coverage=1 00:04:15.484 --rc genhtml_function_coverage=1 00:04:15.485 --rc genhtml_legend=1 00:04:15.485 --rc geninfo_all_blocks=1 00:04:15.485 --rc geninfo_unexecuted_blocks=1 00:04:15.485 00:04:15.485 ' 00:04:15.485 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:15.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.485 --rc genhtml_branch_coverage=1 00:04:15.485 --rc genhtml_function_coverage=1 00:04:15.485 --rc genhtml_legend=1 00:04:15.485 --rc geninfo_all_blocks=1 00:04:15.485 --rc geninfo_unexecuted_blocks=1 00:04:15.485 00:04:15.485 ' 00:04:15.485 15:13:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:15.485 15:13:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1972633 00:04:15.485 15:13:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1972633 00:04:15.485 15:13:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.485 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1972633 ']' 00:04:15.485 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.485 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.485 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.485 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.485 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:15.485 [2024-11-20 15:13:19.288743] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:04:15.485 [2024-11-20 15:13:19.288790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1972633 ] 00:04:15.485 [2024-11-20 15:13:19.364677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.743 [2024-11-20 15:13:19.408016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.743 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.743 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:15.743 15:13:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:15.743 15:13:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:15.743 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.743 
15:13:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:15.743 { 00:04:15.743 "filename": "/tmp/spdk_mem_dump.txt" 00:04:15.743 } 00:04:15.743 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.743 15:13:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:16.065 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:16.065 1 heaps totaling size 810.000000 MiB 00:04:16.065 size: 810.000000 MiB heap id: 0 00:04:16.065 end heaps---------- 00:04:16.065 9 mempools totaling size 595.772034 MiB 00:04:16.065 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:16.065 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:16.065 size: 92.545471 MiB name: bdev_io_1972633 00:04:16.065 size: 50.003479 MiB name: msgpool_1972633 00:04:16.065 size: 36.509338 MiB name: fsdev_io_1972633 00:04:16.065 size: 21.763794 MiB name: PDU_Pool 00:04:16.065 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:16.066 size: 4.133484 MiB name: evtpool_1972633 00:04:16.066 size: 0.026123 MiB name: Session_Pool 00:04:16.066 end mempools------- 00:04:16.066 6 memzones totaling size 4.142822 MiB 00:04:16.066 size: 1.000366 MiB name: RG_ring_0_1972633 00:04:16.066 size: 1.000366 MiB name: RG_ring_1_1972633 00:04:16.066 size: 1.000366 MiB name: RG_ring_4_1972633 00:04:16.066 size: 1.000366 MiB name: RG_ring_5_1972633 00:04:16.066 size: 0.125366 MiB name: RG_ring_2_1972633 00:04:16.066 size: 0.015991 MiB name: RG_ring_3_1972633 00:04:16.066 end memzones------- 00:04:16.066 15:13:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:16.066 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:16.066 list of free elements. 
size: 10.862488 MiB 00:04:16.066 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:16.066 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:16.066 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:16.066 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:16.066 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:16.066 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:16.066 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:16.066 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:16.066 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:16.066 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:16.066 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:16.066 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:16.066 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:16.066 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:16.066 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:16.066 list of standard malloc elements. 
size: 199.218628 MiB 00:04:16.066 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:16.066 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:16.066 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:16.066 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:16.066 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:16.066 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:16.066 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:16.066 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:16.066 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:16.066 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:16.066 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:16.066 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:16.066 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:16.066 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:16.066 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:16.066 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:16.066 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:16.066 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:16.066 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:16.066 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:16.066 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:16.066 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:16.066 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:16.066 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:16.066 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:16.066 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:16.066 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:16.066 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:16.066 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:16.066 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:16.066 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:16.066 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:16.066 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:16.066 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:16.066 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:16.066 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:16.066 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:16.066 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:16.066 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:16.066 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:16.066 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:16.066 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:16.066 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:16.066 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:16.066 list of memzone associated elements. 
size: 599.918884 MiB 00:04:16.066 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:16.066 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:16.066 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:16.066 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:16.066 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:16.066 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1972633_0 00:04:16.066 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:16.066 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1972633_0 00:04:16.066 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:16.066 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1972633_0 00:04:16.066 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:16.066 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:16.066 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:16.066 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:16.066 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:16.066 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1972633_0 00:04:16.066 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:16.066 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1972633 00:04:16.066 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:16.066 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1972633 00:04:16.066 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:16.066 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:16.066 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:16.066 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:16.066 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:16.066 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:16.066 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:16.066 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:16.066 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:16.066 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1972633 00:04:16.066 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:16.066 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1972633 00:04:16.066 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:16.066 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1972633 00:04:16.066 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:04:16.066 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1972633 00:04:16.067 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:16.067 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1972633 00:04:16.067 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:16.067 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1972633 00:04:16.067 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:16.067 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:16.067 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:16.067 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:16.067 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:16.067 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:16.067 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:16.067 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1972633 00:04:16.067 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:16.067 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1972633 00:04:16.067 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:16.067 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:16.067 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:16.067 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:16.067 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:16.067 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1972633 00:04:16.067 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:16.067 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:16.067 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:16.067 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1972633 00:04:16.067 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:16.067 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1972633 00:04:16.067 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:16.067 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1972633 00:04:16.067 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:16.067 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:16.067 15:13:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:16.067 15:13:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1972633 00:04:16.067 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1972633 ']' 00:04:16.067 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1972633 00:04:16.067 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:16.067 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.067 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1972633 00:04:16.067 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:16.067 15:13:19 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:16.067 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1972633' 00:04:16.067 killing process with pid 1972633 00:04:16.067 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1972633 00:04:16.067 15:13:19 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1972633 00:04:16.327 00:04:16.327 real 0m1.022s 00:04:16.327 user 0m0.982s 00:04:16.327 sys 0m0.390s 00:04:16.327 15:13:20 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.327 15:13:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:16.327 ************************************ 00:04:16.327 END TEST dpdk_mem_utility 00:04:16.327 ************************************ 00:04:16.327 15:13:20 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:16.327 15:13:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.327 15:13:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.327 15:13:20 -- common/autotest_common.sh@10 -- # set +x 00:04:16.327 ************************************ 00:04:16.327 START TEST event 00:04:16.327 ************************************ 00:04:16.327 15:13:20 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:16.586 * Looking for test storage... 
00:04:16.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:16.586 15:13:20 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:16.586 15:13:20 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:16.586 15:13:20 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:16.586 15:13:20 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:16.586 15:13:20 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.586 15:13:20 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.586 15:13:20 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.586 15:13:20 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.586 15:13:20 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.586 15:13:20 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.586 15:13:20 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.586 15:13:20 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.586 15:13:20 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.586 15:13:20 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.586 15:13:20 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.586 15:13:20 event -- scripts/common.sh@344 -- # case "$op" in 00:04:16.586 15:13:20 event -- scripts/common.sh@345 -- # : 1 00:04:16.586 15:13:20 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.586 15:13:20 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.586 15:13:20 event -- scripts/common.sh@365 -- # decimal 1 00:04:16.586 15:13:20 event -- scripts/common.sh@353 -- # local d=1 00:04:16.586 15:13:20 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.586 15:13:20 event -- scripts/common.sh@355 -- # echo 1 00:04:16.586 15:13:20 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.586 15:13:20 event -- scripts/common.sh@366 -- # decimal 2 00:04:16.586 15:13:20 event -- scripts/common.sh@353 -- # local d=2 00:04:16.586 15:13:20 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.586 15:13:20 event -- scripts/common.sh@355 -- # echo 2 00:04:16.587 15:13:20 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.587 15:13:20 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.587 15:13:20 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.587 15:13:20 event -- scripts/common.sh@368 -- # return 0 00:04:16.587 15:13:20 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.587 15:13:20 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:16.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.587 --rc genhtml_branch_coverage=1 00:04:16.587 --rc genhtml_function_coverage=1 00:04:16.587 --rc genhtml_legend=1 00:04:16.587 --rc geninfo_all_blocks=1 00:04:16.587 --rc geninfo_unexecuted_blocks=1 00:04:16.587 00:04:16.587 ' 00:04:16.587 15:13:20 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:16.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.587 --rc genhtml_branch_coverage=1 00:04:16.587 --rc genhtml_function_coverage=1 00:04:16.587 --rc genhtml_legend=1 00:04:16.587 --rc geninfo_all_blocks=1 00:04:16.587 --rc geninfo_unexecuted_blocks=1 00:04:16.587 00:04:16.587 ' 00:04:16.587 15:13:20 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:16.587 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:16.587 --rc genhtml_branch_coverage=1 00:04:16.587 --rc genhtml_function_coverage=1 00:04:16.587 --rc genhtml_legend=1 00:04:16.587 --rc geninfo_all_blocks=1 00:04:16.587 --rc geninfo_unexecuted_blocks=1 00:04:16.587 00:04:16.587 ' 00:04:16.587 15:13:20 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:16.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.587 --rc genhtml_branch_coverage=1 00:04:16.587 --rc genhtml_function_coverage=1 00:04:16.587 --rc genhtml_legend=1 00:04:16.587 --rc geninfo_all_blocks=1 00:04:16.587 --rc geninfo_unexecuted_blocks=1 00:04:16.587 00:04:16.587 ' 00:04:16.587 15:13:20 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:16.587 15:13:20 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:16.587 15:13:20 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:16.587 15:13:20 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:16.587 15:13:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.587 15:13:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:16.587 ************************************ 00:04:16.587 START TEST event_perf 00:04:16.587 ************************************ 00:04:16.587 15:13:20 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:16.587 Running I/O for 1 seconds...[2024-11-20 15:13:20.383692] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:04:16.587 [2024-11-20 15:13:20.383749] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1972923 ] 00:04:16.587 [2024-11-20 15:13:20.460603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:16.846 [2024-11-20 15:13:20.505803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:16.846 [2024-11-20 15:13:20.505912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:16.846 [2024-11-20 15:13:20.506017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.846 [2024-11-20 15:13:20.506017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:17.784 Running I/O for 1 seconds... 00:04:17.784 lcore 0: 203259 00:04:17.784 lcore 1: 203259 00:04:17.784 lcore 2: 203258 00:04:17.784 lcore 3: 203258 00:04:17.784 done. 
00:04:17.784 00:04:17.784 real 0m1.183s 00:04:17.784 user 0m4.103s 00:04:17.784 sys 0m0.076s 00:04:17.784 15:13:21 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.784 15:13:21 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:17.784 ************************************ 00:04:17.784 END TEST event_perf 00:04:17.784 ************************************ 00:04:17.784 15:13:21 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:17.784 15:13:21 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:17.784 15:13:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.784 15:13:21 event -- common/autotest_common.sh@10 -- # set +x 00:04:17.784 ************************************ 00:04:17.784 START TEST event_reactor 00:04:17.784 ************************************ 00:04:17.784 15:13:21 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:17.784 [2024-11-20 15:13:21.634752] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:04:17.784 [2024-11-20 15:13:21.634823] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1973173 ] 00:04:18.044 [2024-11-20 15:13:21.711273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.044 [2024-11-20 15:13:21.751417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.982 test_start 00:04:18.982 oneshot 00:04:18.982 tick 100 00:04:18.982 tick 100 00:04:18.982 tick 250 00:04:18.982 tick 100 00:04:18.982 tick 100 00:04:18.982 tick 250 00:04:18.982 tick 100 00:04:18.982 tick 500 00:04:18.982 tick 100 00:04:18.982 tick 100 00:04:18.982 tick 250 00:04:18.982 tick 100 00:04:18.982 tick 100 00:04:18.982 test_end 00:04:18.982 00:04:18.982 real 0m1.173s 00:04:18.982 user 0m1.100s 00:04:18.982 sys 0m0.069s 00:04:18.982 15:13:22 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.982 15:13:22 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:18.982 ************************************ 00:04:18.982 END TEST event_reactor 00:04:18.982 ************************************ 00:04:18.982 15:13:22 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:18.982 15:13:22 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:18.982 15:13:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.982 15:13:22 event -- common/autotest_common.sh@10 -- # set +x 00:04:18.982 ************************************ 00:04:18.982 START TEST event_reactor_perf 00:04:18.982 ************************************ 00:04:18.982 15:13:22 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:18.982 [2024-11-20 15:13:22.878720] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:04:18.982 [2024-11-20 15:13:22.878793] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1973427 ] 00:04:19.241 [2024-11-20 15:13:22.955812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.241 [2024-11-20 15:13:22.995869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.178 test_start 00:04:20.179 test_end 00:04:20.179 Performance: 504127 events per second 00:04:20.179 00:04:20.179 real 0m1.174s 00:04:20.179 user 0m1.099s 00:04:20.179 sys 0m0.071s 00:04:20.179 15:13:24 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.179 15:13:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:20.179 ************************************ 00:04:20.179 END TEST event_reactor_perf 00:04:20.179 ************************************ 00:04:20.179 15:13:24 event -- event/event.sh@49 -- # uname -s 00:04:20.179 15:13:24 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:20.179 15:13:24 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:20.179 15:13:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.179 15:13:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.179 15:13:24 event -- common/autotest_common.sh@10 -- # set +x 00:04:20.438 ************************************ 00:04:20.438 START TEST event_scheduler 00:04:20.438 ************************************ 00:04:20.438 15:13:24 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:20.438 * Looking for test storage... 00:04:20.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:20.438 15:13:24 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:20.438 15:13:24 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:20.438 15:13:24 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:20.438 15:13:24 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.438 15:13:24 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:20.438 15:13:24 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.438 15:13:24 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:20.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.438 --rc genhtml_branch_coverage=1 00:04:20.438 --rc genhtml_function_coverage=1 00:04:20.438 --rc genhtml_legend=1 00:04:20.438 --rc geninfo_all_blocks=1 00:04:20.438 --rc geninfo_unexecuted_blocks=1 00:04:20.438 00:04:20.438 ' 00:04:20.438 15:13:24 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:20.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.438 --rc genhtml_branch_coverage=1 00:04:20.438 --rc genhtml_function_coverage=1 00:04:20.438 --rc 
genhtml_legend=1 00:04:20.438 --rc geninfo_all_blocks=1 00:04:20.438 --rc geninfo_unexecuted_blocks=1 00:04:20.438 00:04:20.438 ' 00:04:20.438 15:13:24 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:20.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.438 --rc genhtml_branch_coverage=1 00:04:20.438 --rc genhtml_function_coverage=1 00:04:20.438 --rc genhtml_legend=1 00:04:20.438 --rc geninfo_all_blocks=1 00:04:20.438 --rc geninfo_unexecuted_blocks=1 00:04:20.438 00:04:20.438 ' 00:04:20.438 15:13:24 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:20.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.438 --rc genhtml_branch_coverage=1 00:04:20.438 --rc genhtml_function_coverage=1 00:04:20.438 --rc genhtml_legend=1 00:04:20.438 --rc geninfo_all_blocks=1 00:04:20.438 --rc geninfo_unexecuted_blocks=1 00:04:20.438 00:04:20.438 ' 00:04:20.438 15:13:24 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:20.438 15:13:24 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1973710 00:04:20.438 15:13:24 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:20.438 15:13:24 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.438 15:13:24 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1973710 00:04:20.438 15:13:24 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1973710 ']' 00:04:20.438 15:13:24 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.438 15:13:24 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.438 15:13:24 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.438 15:13:24 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.438 15:13:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:20.438 [2024-11-20 15:13:24.320284] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:04:20.438 [2024-11-20 15:13:24.320332] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1973710 ] 00:04:20.698 [2024-11-20 15:13:24.393227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:20.698 [2024-11-20 15:13:24.436615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.698 [2024-11-20 15:13:24.436727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.698 [2024-11-20 15:13:24.436834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:20.698 [2024-11-20 15:13:24.436834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:20.698 15:13:24 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.698 15:13:24 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:20.698 15:13:24 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:20.698 15:13:24 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.698 15:13:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:20.698 [2024-11-20 15:13:24.497375] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:20.698 [2024-11-20 15:13:24.497392] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:20.698 [2024-11-20 15:13:24.497401] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:20.698 [2024-11-20 15:13:24.497407] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:20.698 [2024-11-20 15:13:24.497412] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:20.698 15:13:24 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.698 15:13:24 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:20.698 15:13:24 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.698 15:13:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:20.698 [2024-11-20 15:13:24.571255] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:20.698 15:13:24 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.698 15:13:24 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:20.698 15:13:24 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.698 15:13:24 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.698 15:13:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:20.958 ************************************ 00:04:20.958 START TEST scheduler_create_thread 00:04:20.958 ************************************ 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.958 2 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.958 3 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.958 4 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.958 5 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.958 15:13:24 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.958 6 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.958 7 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.958 8 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.958 15:13:24 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.958 9 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.958 10 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.958 15:13:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.895 15:13:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.895 15:13:25 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:21.895 15:13:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.895 15:13:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.446 15:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.446 15:13:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:23.446 15:13:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:23.446 15:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.446 15:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.382 15:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.382 00:04:24.382 real 0m3.381s 00:04:24.382 user 0m0.022s 00:04:24.382 sys 0m0.007s 00:04:24.382 15:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.382 15:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.382 ************************************ 00:04:24.382 END TEST scheduler_create_thread 00:04:24.382 ************************************ 00:04:24.382 15:13:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:24.382 15:13:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1973710 00:04:24.382 15:13:28 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1973710 ']' 00:04:24.382 15:13:28 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1973710 00:04:24.382 15:13:28 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:24.382 15:13:28 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.382 15:13:28 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1973710 00:04:24.383 15:13:28 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:24.383 15:13:28 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:24.383 15:13:28 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1973710' 00:04:24.383 killing process with pid 1973710 00:04:24.383 15:13:28 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1973710 00:04:24.383 15:13:28 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1973710 00:04:24.641 [2024-11-20 15:13:28.371375] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:24.901 00:04:24.901 real 0m4.467s 00:04:24.901 user 0m7.840s 00:04:24.901 sys 0m0.400s 00:04:24.901 15:13:28 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.901 15:13:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:24.901 ************************************ 00:04:24.901 END TEST event_scheduler 00:04:24.901 ************************************ 00:04:24.901 15:13:28 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:24.901 15:13:28 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:24.901 15:13:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.901 15:13:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.901 15:13:28 event -- common/autotest_common.sh@10 -- # set +x 00:04:24.901 ************************************ 00:04:24.901 START TEST app_repeat 00:04:24.901 ************************************ 00:04:24.901 15:13:28 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:24.901 15:13:28 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.901 15:13:28 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.901 15:13:28 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:24.901 15:13:28 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.901 15:13:28 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:24.901 15:13:28 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:24.901 15:13:28 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:24.901 15:13:28 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1974462 00:04:24.901 15:13:28 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.901 15:13:28 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:24.901 15:13:28 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1974462' 00:04:24.901 Process app_repeat pid: 1974462 00:04:24.901 15:13:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:24.901 15:13:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:24.901 spdk_app_start Round 0 00:04:24.901 15:13:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1974462 /var/tmp/spdk-nbd.sock 00:04:24.901 15:13:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1974462 ']' 00:04:24.901 15:13:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:24.901 15:13:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.901 15:13:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:24.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:24.901 15:13:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.901 15:13:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:24.901 [2024-11-20 15:13:28.678028] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:04:24.901 [2024-11-20 15:13:28.678089] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974462 ] 00:04:24.901 [2024-11-20 15:13:28.752037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:24.901 [2024-11-20 15:13:28.793101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.901 [2024-11-20 15:13:28.793102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.160 15:13:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.160 15:13:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:25.160 15:13:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:25.419 Malloc0 00:04:25.419 15:13:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:25.419 Malloc1 00:04:25.419 15:13:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:25.419 15:13:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.419 15:13:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.419 15:13:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:25.419 15:13:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.419 15:13:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:25.419 15:13:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:25.419 
15:13:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.419 15:13:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.419 15:13:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:25.419 15:13:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.419 15:13:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:25.419 15:13:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:25.419 15:13:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:25.419 15:13:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.419 15:13:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:25.677 /dev/nbd0 00:04:25.677 15:13:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:25.677 15:13:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:25.677 15:13:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:25.677 15:13:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:25.677 15:13:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:25.677 15:13:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:25.677 15:13:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:25.677 15:13:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:25.677 15:13:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:25.677 15:13:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:25.677 15:13:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:25.677 1+0 records in 00:04:25.677 1+0 records out 00:04:25.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193338 s, 21.2 MB/s 00:04:25.677 15:13:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.677 15:13:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:25.677 15:13:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.677 15:13:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:25.677 15:13:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:25.677 15:13:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.677 15:13:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.677 15:13:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:25.936 /dev/nbd1 00:04:25.936 15:13:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:25.936 15:13:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:25.936 15:13:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:25.936 15:13:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:25.936 15:13:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:25.936 15:13:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:25.936 15:13:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:25.936 15:13:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:25.936 15:13:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:25.936 15:13:29 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:25.936 15:13:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.936 1+0 records in 00:04:25.936 1+0 records out 00:04:25.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230276 s, 17.8 MB/s 00:04:25.936 15:13:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.936 15:13:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:25.936 15:13:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.936 15:13:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:25.936 15:13:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:25.936 15:13:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.936 15:13:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.936 15:13:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.936 15:13:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.936 15:13:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:26.196 { 00:04:26.196 "nbd_device": "/dev/nbd0", 00:04:26.196 "bdev_name": "Malloc0" 00:04:26.196 }, 00:04:26.196 { 00:04:26.196 "nbd_device": "/dev/nbd1", 00:04:26.196 "bdev_name": "Malloc1" 00:04:26.196 } 00:04:26.196 ]' 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:26.196 { 00:04:26.196 "nbd_device": "/dev/nbd0", 00:04:26.196 "bdev_name": "Malloc0" 00:04:26.196 
}, 00:04:26.196 { 00:04:26.196 "nbd_device": "/dev/nbd1", 00:04:26.196 "bdev_name": "Malloc1" 00:04:26.196 } 00:04:26.196 ]' 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:26.196 /dev/nbd1' 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:26.196 /dev/nbd1' 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:26.196 256+0 records in 00:04:26.196 256+0 records out 00:04:26.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00362515 s, 289 MB/s 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:26.196 256+0 records in 00:04:26.196 256+0 records out 00:04:26.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146356 s, 71.6 MB/s 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:26.196 15:13:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:26.455 256+0 records in 00:04:26.455 256+0 records out 00:04:26.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148168 s, 70.8 MB/s 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:26.455 15:13:30 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:26.455 15:13:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:26.713 15:13:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:26.713 15:13:30 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:26.713 15:13:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:26.713 15:13:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:26.713 15:13:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:26.713 15:13:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:26.713 15:13:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:26.713 15:13:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:26.713 15:13:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:26.713 15:13:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.713 15:13:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.973 15:13:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:26.973 15:13:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:26.973 15:13:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.973 15:13:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:26.973 15:13:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:26.973 15:13:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.973 15:13:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:26.973 15:13:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:26.973 15:13:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:26.973 15:13:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:26.973 15:13:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:26.973 15:13:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:26.973 15:13:30 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:27.232 15:13:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:27.490 [2024-11-20 15:13:31.187838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:27.490 [2024-11-20 15:13:31.225116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.490 [2024-11-20 15:13:31.225117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.490 [2024-11-20 15:13:31.266308] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:27.490 [2024-11-20 15:13:31.266354] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:30.780 15:13:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:30.780 15:13:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:30.780 spdk_app_start Round 1 00:04:30.780 15:13:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1974462 /var/tmp/spdk-nbd.sock 00:04:30.780 15:13:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1974462 ']' 00:04:30.780 15:13:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:30.780 15:13:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.780 15:13:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:30.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:30.780 15:13:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.780 15:13:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:30.780 15:13:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.780 15:13:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:30.780 15:13:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.780 Malloc0 00:04:30.780 15:13:34 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.780 Malloc1 00:04:30.780 15:13:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.780 15:13:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.780 15:13:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.780 15:13:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:30.780 15:13:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.780 15:13:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:30.780 15:13:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.780 15:13:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.780 15:13:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.780 15:13:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:30.780 15:13:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.780 15:13:34 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:30.780 15:13:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:30.780 15:13:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:30.780 15:13:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.780 15:13:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:31.039 /dev/nbd0 00:04:31.039 15:13:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:31.039 15:13:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:31.039 15:13:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:31.039 15:13:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:31.039 15:13:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:31.039 15:13:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:31.039 15:13:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:31.039 15:13:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:31.039 15:13:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:31.039 15:13:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:31.039 15:13:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:31.039 1+0 records in 00:04:31.039 1+0 records out 00:04:31.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188962 s, 21.7 MB/s 00:04:31.039 15:13:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.039 15:13:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:31.039 15:13:34 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.039 15:13:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:31.039 15:13:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:31.039 15:13:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:31.039 15:13:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.039 15:13:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:31.299 /dev/nbd1 00:04:31.299 15:13:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:31.299 15:13:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:31.299 15:13:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:31.299 15:13:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:31.299 15:13:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:31.299 15:13:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:31.299 15:13:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:31.299 15:13:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:31.299 15:13:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:31.299 15:13:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:31.299 15:13:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:31.299 1+0 records in 00:04:31.299 1+0 records out 00:04:31.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245659 s, 16.7 MB/s 00:04:31.299 15:13:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.299 15:13:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:31.299 15:13:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.299 15:13:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:31.299 15:13:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:31.299 15:13:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:31.299 15:13:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.299 15:13:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.299 15:13:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.299 15:13:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:31.559 { 00:04:31.559 "nbd_device": "/dev/nbd0", 00:04:31.559 "bdev_name": "Malloc0" 00:04:31.559 }, 00:04:31.559 { 00:04:31.559 "nbd_device": "/dev/nbd1", 00:04:31.559 "bdev_name": "Malloc1" 00:04:31.559 } 00:04:31.559 ]' 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:31.559 { 00:04:31.559 "nbd_device": "/dev/nbd0", 00:04:31.559 "bdev_name": "Malloc0" 00:04:31.559 }, 00:04:31.559 { 00:04:31.559 "nbd_device": "/dev/nbd1", 00:04:31.559 "bdev_name": "Malloc1" 00:04:31.559 } 00:04:31.559 ]' 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:31.559 /dev/nbd1' 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:31.559 /dev/nbd1' 00:04:31.559 
15:13:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:31.559 256+0 records in 00:04:31.559 256+0 records out 00:04:31.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100033 s, 105 MB/s 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:31.559 256+0 records in 00:04:31.559 256+0 records out 00:04:31.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143292 s, 73.2 MB/s 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:31.559 256+0 records in 00:04:31.559 256+0 records out 00:04:31.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155798 s, 67.3 MB/s 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.559 15:13:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:31.818 15:13:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.818 15:13:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:31.818 15:13:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.818 15:13:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:31.818 15:13:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:31.818 15:13:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:31.818 15:13:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.818 15:13:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:31.818 15:13:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:31.818 15:13:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:31.818 15:13:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:31.818 15:13:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.818 15:13:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.818 15:13:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:31.818 15:13:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:31.818 15:13:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.818 15:13:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.818 15:13:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:32.077 15:13:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:32.077 15:13:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:32.077 15:13:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:32.077 15:13:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.077 15:13:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.077 15:13:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:32.077 15:13:35 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:32.077 15:13:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.077 15:13:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.077 15:13:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.077 15:13:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:32.336 15:13:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:32.336 15:13:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:32.336 15:13:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:32.336 15:13:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:32.336 15:13:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:32.336 15:13:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:32.336 15:13:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:32.336 15:13:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:32.336 15:13:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:32.336 15:13:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:32.336 15:13:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:32.336 15:13:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:32.336 15:13:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:32.595 15:13:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:32.854 [2024-11-20 15:13:36.526919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.854 [2024-11-20 15:13:36.564921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.854 [2024-11-20 15:13:36.564921] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.854 [2024-11-20 15:13:36.606777] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:32.854 [2024-11-20 15:13:36.606819] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:36.145 15:13:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:36.145 15:13:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:36.145 spdk_app_start Round 2 00:04:36.145 15:13:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1974462 /var/tmp/spdk-nbd.sock 00:04:36.145 15:13:39 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1974462 ']' 00:04:36.145 15:13:39 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:36.145 15:13:39 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.145 15:13:39 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:36.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:36.145 15:13:39 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.145 15:13:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:36.145 15:13:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.145 15:13:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:36.145 15:13:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:36.145 Malloc0 00:04:36.145 15:13:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:36.145 Malloc1 00:04:36.145 15:13:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:36.145 15:13:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.145 15:13:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:36.145 15:13:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:36.145 15:13:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.145 15:13:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:36.145 15:13:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:36.145 15:13:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.145 15:13:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:36.145 15:13:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:36.145 15:13:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.145 15:13:40 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:36.145 15:13:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:36.146 15:13:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:36.146 15:13:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.146 15:13:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:36.404 /dev/nbd0 00:04:36.404 15:13:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:36.404 15:13:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:36.404 15:13:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:36.404 15:13:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:36.404 15:13:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:36.405 15:13:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:36.405 15:13:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:36.405 15:13:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:36.405 15:13:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:36.405 15:13:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:36.405 15:13:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:36.405 1+0 records in 00:04:36.405 1+0 records out 00:04:36.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213152 s, 19.2 MB/s 00:04:36.405 15:13:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.405 15:13:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:36.405 15:13:40 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.405 15:13:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:36.405 15:13:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:36.405 15:13:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:36.405 15:13:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.405 15:13:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:36.663 /dev/nbd1 00:04:36.663 15:13:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:36.663 15:13:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:36.663 15:13:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:36.663 15:13:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:36.663 15:13:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:36.663 15:13:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:36.663 15:13:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:36.663 15:13:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:36.663 15:13:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:36.663 15:13:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:36.663 15:13:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:36.663 1+0 records in 00:04:36.663 1+0 records out 00:04:36.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218868 s, 18.7 MB/s 00:04:36.663 15:13:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.663 15:13:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:36.663 15:13:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.663 15:13:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:36.663 15:13:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:36.663 15:13:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:36.663 15:13:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.663 15:13:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:36.663 15:13:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.663 15:13:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:36.922 { 00:04:36.922 "nbd_device": "/dev/nbd0", 00:04:36.922 "bdev_name": "Malloc0" 00:04:36.922 }, 00:04:36.922 { 00:04:36.922 "nbd_device": "/dev/nbd1", 00:04:36.922 "bdev_name": "Malloc1" 00:04:36.922 } 00:04:36.922 ]' 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:36.922 { 00:04:36.922 "nbd_device": "/dev/nbd0", 00:04:36.922 "bdev_name": "Malloc0" 00:04:36.922 }, 00:04:36.922 { 00:04:36.922 "nbd_device": "/dev/nbd1", 00:04:36.922 "bdev_name": "Malloc1" 00:04:36.922 } 00:04:36.922 ]' 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:36.922 /dev/nbd1' 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:36.922 /dev/nbd1' 00:04:36.922 
15:13:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:36.922 256+0 records in 00:04:36.922 256+0 records out 00:04:36.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106242 s, 98.7 MB/s 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:36.922 256+0 records in 00:04:36.922 256+0 records out 00:04:36.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143925 s, 72.9 MB/s 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:36.922 256+0 records in 00:04:36.922 256+0 records out 00:04:36.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151206 s, 69.3 MB/s 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:36.922 15:13:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:37.182 15:13:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:37.182 15:13:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:37.182 15:13:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:37.182 15:13:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:37.182 15:13:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.182 15:13:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:37.182 15:13:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:37.182 15:13:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:37.182 15:13:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:37.182 15:13:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:37.182 15:13:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:37.182 15:13:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:37.182 15:13:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:37.182 15:13:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:37.182 15:13:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:37.182 15:13:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:37.182 15:13:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:37.182 15:13:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:37.182 15:13:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:37.182 15:13:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:37.441 15:13:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:37.441 15:13:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:37.441 15:13:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:37.441 15:13:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:37.441 15:13:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:37.441 15:13:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:37.441 15:13:41 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:37.441 15:13:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:37.441 15:13:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:37.441 15:13:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.441 15:13:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:37.700 15:13:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:37.700 15:13:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:37.700 15:13:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:37.700 15:13:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:37.700 15:13:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:37.700 15:13:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:37.700 15:13:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:37.700 15:13:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:37.700 15:13:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:37.700 15:13:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:37.700 15:13:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:37.700 15:13:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:37.700 15:13:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:37.959 15:13:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:38.218 [2024-11-20 15:13:41.873005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.218 [2024-11-20 15:13:41.911372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.218 [2024-11-20 15:13:41.911372] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.218 [2024-11-20 15:13:41.952783] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:38.218 [2024-11-20 15:13:41.952825] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:41.505 15:13:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1974462 /var/tmp/spdk-nbd.sock 00:04:41.505 15:13:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1974462 ']' 00:04:41.505 15:13:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:41.505 15:13:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.505 15:13:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:41.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:41.505 15:13:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.505 15:13:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:41.505 15:13:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.505 15:13:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:41.505 15:13:44 event.app_repeat -- event/event.sh@39 -- # killprocess 1974462 00:04:41.505 15:13:44 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1974462 ']' 00:04:41.505 15:13:44 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1974462 00:04:41.505 15:13:44 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:41.505 15:13:44 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.505 15:13:44 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1974462 00:04:41.505 15:13:44 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.505 15:13:44 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.505 15:13:44 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1974462' 00:04:41.505 killing process with pid 1974462 00:04:41.505 15:13:44 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1974462 00:04:41.505 15:13:44 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1974462 00:04:41.505 spdk_app_start is called in Round 0. 00:04:41.505 Shutdown signal received, stop current app iteration 00:04:41.505 Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 reinitialization... 00:04:41.505 spdk_app_start is called in Round 1. 00:04:41.505 Shutdown signal received, stop current app iteration 00:04:41.505 Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 reinitialization... 00:04:41.505 spdk_app_start is called in Round 2. 
00:04:41.505 Shutdown signal received, stop current app iteration 00:04:41.505 Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 reinitialization... 00:04:41.505 spdk_app_start is called in Round 3. 00:04:41.505 Shutdown signal received, stop current app iteration 00:04:41.505 15:13:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:41.505 15:13:45 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:41.505 00:04:41.505 real 0m16.478s 00:04:41.505 user 0m36.317s 00:04:41.505 sys 0m2.532s 00:04:41.505 15:13:45 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.505 15:13:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:41.505 ************************************ 00:04:41.505 END TEST app_repeat 00:04:41.505 ************************************ 00:04:41.505 15:13:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:41.505 15:13:45 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:41.505 15:13:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.505 15:13:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.505 15:13:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.505 ************************************ 00:04:41.505 START TEST cpu_locks 00:04:41.505 ************************************ 00:04:41.505 15:13:45 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:41.505 * Looking for test storage... 
00:04:41.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:41.505 15:13:45 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.505 15:13:45 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.505 15:13:45 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.506 15:13:45 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.506 15:13:45 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:41.506 15:13:45 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.506 15:13:45 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.506 --rc genhtml_branch_coverage=1 00:04:41.506 --rc genhtml_function_coverage=1 00:04:41.506 --rc genhtml_legend=1 00:04:41.506 --rc geninfo_all_blocks=1 00:04:41.506 --rc geninfo_unexecuted_blocks=1 00:04:41.506 00:04:41.506 ' 00:04:41.506 15:13:45 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.506 --rc genhtml_branch_coverage=1 00:04:41.506 --rc genhtml_function_coverage=1 00:04:41.506 --rc genhtml_legend=1 00:04:41.506 --rc geninfo_all_blocks=1 00:04:41.506 --rc geninfo_unexecuted_blocks=1 
00:04:41.506 00:04:41.506 ' 00:04:41.506 15:13:45 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.506 --rc genhtml_branch_coverage=1 00:04:41.506 --rc genhtml_function_coverage=1 00:04:41.506 --rc genhtml_legend=1 00:04:41.506 --rc geninfo_all_blocks=1 00:04:41.506 --rc geninfo_unexecuted_blocks=1 00:04:41.506 00:04:41.506 ' 00:04:41.506 15:13:45 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.506 --rc genhtml_branch_coverage=1 00:04:41.506 --rc genhtml_function_coverage=1 00:04:41.506 --rc genhtml_legend=1 00:04:41.506 --rc geninfo_all_blocks=1 00:04:41.506 --rc geninfo_unexecuted_blocks=1 00:04:41.506 00:04:41.506 ' 00:04:41.506 15:13:45 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:41.506 15:13:45 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:41.506 15:13:45 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:41.506 15:13:45 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:41.506 15:13:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.506 15:13:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.506 15:13:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.506 ************************************ 00:04:41.506 START TEST default_locks 00:04:41.506 ************************************ 00:04:41.506 15:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:41.506 15:13:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1977464 00:04:41.506 15:13:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1977464 00:04:41.506 15:13:45 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.506 15:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1977464 ']' 00:04:41.506 15:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.506 15:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.506 15:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.506 15:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.506 15:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.766 [2024-11-20 15:13:45.449132] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:04:41.766 [2024-11-20 15:13:45.449176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977464 ] 00:04:41.766 [2024-11-20 15:13:45.522819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.766 [2024-11-20 15:13:45.562908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.024 15:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.024 15:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:42.024 15:13:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1977464 00:04:42.024 15:13:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1977464 00:04:42.024 15:13:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:42.283 lslocks: write error 00:04:42.283 15:13:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1977464 00:04:42.283 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1977464 ']' 00:04:42.283 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1977464 00:04:42.283 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:42.283 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.283 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1977464 00:04:42.283 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.283 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.283 15:13:46 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1977464' 00:04:42.283 killing process with pid 1977464 00:04:42.283 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1977464 00:04:42.283 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1977464 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1977464 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1977464 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1977464 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1977464 ']' 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1977464) - No such process 00:04:42.543 ERROR: process (pid: 1977464) is no longer running 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:42.543 00:04:42.543 real 0m0.987s 00:04:42.543 user 0m0.925s 00:04:42.543 sys 0m0.469s 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.543 15:13:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.543 ************************************ 00:04:42.543 END TEST default_locks 00:04:42.543 ************************************ 00:04:42.543 15:13:46 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:42.543 15:13:46 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.543 15:13:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.543 15:13:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.803 ************************************ 00:04:42.803 START TEST default_locks_via_rpc 00:04:42.803 ************************************ 00:04:42.803 15:13:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:42.803 15:13:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1977722 00:04:42.803 15:13:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1977722 00:04:42.803 15:13:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.803 15:13:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1977722 ']' 00:04:42.803 15:13:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.803 15:13:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.803 15:13:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.803 15:13:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.803 15:13:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.803 [2024-11-20 15:13:46.504586] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:04:42.803 [2024-11-20 15:13:46.504626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977722 ] 00:04:42.803 [2024-11-20 15:13:46.581183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.803 [2024-11-20 15:13:46.623550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.062 15:13:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.062 15:13:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:43.062 15:13:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:43.062 15:13:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.062 15:13:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.062 15:13:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.062 15:13:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:43.062 15:13:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:43.062 15:13:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:43.062 15:13:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:43.062 15:13:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:43.062 15:13:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.063 15:13:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.063 15:13:46 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.063 15:13:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1977722 00:04:43.063 15:13:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1977722 00:04:43.063 15:13:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:43.322 15:13:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1977722 00:04:43.322 15:13:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1977722 ']' 00:04:43.322 15:13:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1977722 00:04:43.322 15:13:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:43.322 15:13:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.322 15:13:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1977722 00:04:43.581 15:13:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.581 15:13:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.581 15:13:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1977722' 00:04:43.581 killing process with pid 1977722 00:04:43.581 15:13:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1977722 00:04:43.581 15:13:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1977722 00:04:43.841 00:04:43.841 real 0m1.114s 00:04:43.841 user 0m1.059s 00:04:43.841 sys 0m0.516s 00:04:43.841 15:13:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.841 15:13:47 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.841 ************************************ 00:04:43.841 END TEST default_locks_via_rpc 00:04:43.841 ************************************ 00:04:43.841 15:13:47 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:43.841 15:13:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.841 15:13:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.841 15:13:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:43.841 ************************************ 00:04:43.841 START TEST non_locking_app_on_locked_coremask 00:04:43.841 ************************************ 00:04:43.841 15:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:43.841 15:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1977976 00:04:43.841 15:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1977976 /var/tmp/spdk.sock 00:04:43.841 15:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.841 15:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1977976 ']' 00:04:43.841 15:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.841 15:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.841 15:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:43.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.841 15:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.841 15:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.841 [2024-11-20 15:13:47.686416] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:04:43.841 [2024-11-20 15:13:47.686458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977976 ] 00:04:44.100 [2024-11-20 15:13:47.761586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.100 [2024-11-20 15:13:47.803832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.359 15:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.359 15:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:44.359 15:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1977982 00:04:44.359 15:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1977982 /var/tmp/spdk2.sock 00:04:44.359 15:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:44.359 15:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1977982 ']' 00:04:44.359 15:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:04:44.359 15:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.359 15:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:44.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:44.359 15:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.359 15:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.359 [2024-11-20 15:13:48.063499] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:04:44.359 [2024-11-20 15:13:48.063548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977982 ] 00:04:44.359 [2024-11-20 15:13:48.151124] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:44.359 [2024-11-20 15:13:48.151146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.359 [2024-11-20 15:13:48.235972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.295 15:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.295 15:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:45.295 15:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1977976 00:04:45.295 15:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1977976 00:04:45.295 15:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:45.862 lslocks: write error 00:04:45.862 15:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1977976 00:04:45.862 15:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1977976 ']' 00:04:45.862 15:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1977976 00:04:45.862 15:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:45.862 15:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.862 15:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1977976 00:04:45.862 15:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.862 15:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.862 15:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1977976' 00:04:45.862 killing process with pid 1977976 00:04:45.862 15:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1977976 00:04:45.862 15:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1977976 00:04:46.430 15:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1977982 00:04:46.430 15:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1977982 ']' 00:04:46.430 15:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1977982 00:04:46.430 15:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:46.430 15:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.430 15:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1977982 00:04:46.430 15:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.430 15:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.430 15:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1977982' 00:04:46.430 killing process with pid 1977982 00:04:46.430 15:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1977982 00:04:46.430 15:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1977982 00:04:46.689 00:04:46.689 real 0m2.887s 00:04:46.689 user 0m3.029s 00:04:46.689 sys 0m0.959s 00:04:46.689 15:13:50 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.689 15:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.689 ************************************ 00:04:46.689 END TEST non_locking_app_on_locked_coremask 00:04:46.689 ************************************ 00:04:46.689 15:13:50 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:46.689 15:13:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.689 15:13:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.689 15:13:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.689 ************************************ 00:04:46.689 START TEST locking_app_on_unlocked_coremask 00:04:46.689 ************************************ 00:04:46.689 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:46.689 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1978476 00:04:46.689 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1978476 /var/tmp/spdk.sock 00:04:46.689 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:46.689 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1978476 ']' 00:04:46.690 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.690 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.690 15:13:50 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.690 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.690 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.949 [2024-11-20 15:13:50.643099] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:04:46.949 [2024-11-20 15:13:50.643144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1978476 ] 00:04:46.949 [2024-11-20 15:13:50.718616] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:46.949 [2024-11-20 15:13:50.718640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.949 [2024-11-20 15:13:50.756812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.208 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.209 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:47.209 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1978483 00:04:47.209 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1978483 /var/tmp/spdk2.sock 00:04:47.209 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:47.209 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1978483 ']' 00:04:47.209 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:47.209 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.209 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:47.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:47.209 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.209 15:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.209 [2024-11-20 15:13:51.032310] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:04:47.209 [2024-11-20 15:13:51.032358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1978483 ] 00:04:47.468 [2024-11-20 15:13:51.122658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.468 [2024-11-20 15:13:51.203556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.036 15:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.036 15:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:48.036 15:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1978483 00:04:48.036 15:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1978483 00:04:48.036 15:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:48.604 lslocks: write error 00:04:48.604 15:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1978476 00:04:48.604 15:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1978476 ']' 00:04:48.604 15:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1978476 00:04:48.604 15:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:48.604 15:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.604 15:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1978476 00:04:48.604 15:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.604 15:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.604 15:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1978476' 00:04:48.604 killing process with pid 1978476 00:04:48.604 15:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1978476 00:04:48.604 15:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1978476 00:04:49.172 15:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1978483 00:04:49.172 15:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1978483 ']' 00:04:49.172 15:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1978483 00:04:49.172 15:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:49.172 15:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.172 15:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1978483 00:04:49.431 15:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.431 15:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.431 15:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1978483' 00:04:49.431 killing process with pid 1978483 00:04:49.431 15:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1978483 00:04:49.431 15:13:53 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1978483 00:04:49.692 00:04:49.692 real 0m2.789s 00:04:49.692 user 0m2.936s 00:04:49.692 sys 0m0.947s 00:04:49.692 15:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.692 15:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.692 ************************************ 00:04:49.692 END TEST locking_app_on_unlocked_coremask 00:04:49.692 ************************************ 00:04:49.692 15:13:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:49.692 15:13:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.692 15:13:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.692 15:13:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.692 ************************************ 00:04:49.692 START TEST locking_app_on_locked_coremask 00:04:49.692 ************************************ 00:04:49.692 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:49.692 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1978975 00:04:49.692 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1978975 /var/tmp/spdk.sock 00:04:49.692 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.692 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1978975 ']' 00:04:49.692 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:49.692 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.692 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.692 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.692 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.692 [2024-11-20 15:13:53.502436] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:04:49.692 [2024-11-20 15:13:53.502478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1978975 ] 00:04:49.692 [2024-11-20 15:13:53.579439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.951 [2024-11-20 15:13:53.622903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.951 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.952 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:49.952 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1978984 00:04:49.952 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1978984 /var/tmp/spdk2.sock 00:04:49.952 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:04:49.952 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:49.952 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1978984 /var/tmp/spdk2.sock 00:04:49.952 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:49.952 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.952 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:49.952 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.952 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1978984 /var/tmp/spdk2.sock 00:04:49.952 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1978984 ']' 00:04:49.952 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:49.952 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.952 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:49.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:49.952 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.952 15:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.211 [2024-11-20 15:13:53.886117] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:04:50.211 [2024-11-20 15:13:53.886165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1978984 ] 00:04:50.211 [2024-11-20 15:13:53.970055] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1978975 has claimed it. 00:04:50.211 [2024-11-20 15:13:53.970085] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:50.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1978984) - No such process 00:04:50.778 ERROR: process (pid: 1978984) is no longer running 00:04:50.778 15:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.778 15:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:50.778 15:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:50.778 15:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:50.778 15:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:50.778 15:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:50.778 15:13:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1978975 00:04:50.778 15:13:54 
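The `NOT waitforlisten ...` step in the trace above deliberately runs a command that is expected to fail (the second `spdk_tgt` cannot claim core 0) and treats that failure as a pass, recording `es=1`. A minimal sketch of such a status-inverting wrapper is below; this re-implementation is an assumption for illustration, not the actual `NOT`/`valid_exec_arg` helpers from `autotest_common.sh`:

```shell
#!/usr/bin/env bash
# Hedged sketch of a NOT-style wrapper: run a command and invert its
# exit status, so an expected failure makes the test step succeed.
# (Illustrative only; the real autotest_common.sh helper differs.)
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded
    fi
    return 0       # command failed, which is what we wanted
}

NOT false && echo "failure was expected and observed"
```

Used this way, a step like `NOT waitforlisten <pid> /var/tmp/spdk2.sock` passes exactly when the second target fails to come up.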
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1978975 00:04:50.778 15:13:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:51.347 lslocks: write error 00:04:51.347 15:13:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1978975 00:04:51.347 15:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1978975 ']' 00:04:51.347 15:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1978975 00:04:51.347 15:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:51.347 15:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.347 15:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1978975 00:04:51.347 15:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.347 15:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.347 15:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1978975' 00:04:51.347 killing process with pid 1978975 00:04:51.347 15:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1978975 00:04:51.347 15:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1978975 00:04:51.606 00:04:51.606 real 0m1.957s 00:04:51.606 user 0m2.083s 00:04:51.606 sys 0m0.659s 00:04:51.606 15:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.606 15:13:55 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:51.606 ************************************ 00:04:51.606 END TEST locking_app_on_locked_coremask 00:04:51.606 ************************************ 00:04:51.606 15:13:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:51.606 15:13:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.606 15:13:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.606 15:13:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.606 ************************************ 00:04:51.606 START TEST locking_overlapped_coremask 00:04:51.606 ************************************ 00:04:51.606 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:51.606 15:13:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1979284 00:04:51.606 15:13:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1979284 /var/tmp/spdk.sock 00:04:51.606 15:13:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:51.606 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1979284 ']' 00:04:51.606 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.606 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.606 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:51.606 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.606 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.865 [2024-11-20 15:13:55.531419] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:04:51.865 [2024-11-20 15:13:55.531467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979284 ] 00:04:51.865 [2024-11-20 15:13:55.605512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:51.865 [2024-11-20 15:13:55.651738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.865 [2024-11-20 15:13:55.651844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.865 [2024-11-20 15:13:55.651844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:52.125 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.125 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:52.125 15:13:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1979472 00:04:52.125 15:13:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1979472 /var/tmp/spdk2.sock 00:04:52.125 15:13:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:52.125 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:52.125 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 1979472 /var/tmp/spdk2.sock 00:04:52.125 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:52.125 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.125 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:52.125 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.125 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1979472 /var/tmp/spdk2.sock 00:04:52.125 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1979472 ']' 00:04:52.125 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:52.125 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.125 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:52.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:52.125 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.126 15:13:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.126 [2024-11-20 15:13:55.928591] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:04:52.126 [2024-11-20 15:13:55.928641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979472 ] 00:04:52.126 [2024-11-20 15:13:56.020080] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1979284 has claimed it. 00:04:52.126 [2024-11-20 15:13:56.020121] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:52.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1979472) - No such process 00:04:52.694 ERROR: process (pid: 1979472) is no longer running 00:04:52.694 15:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.694 15:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:52.694 15:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:52.694 15:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:52.694 15:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:52.694 15:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:52.694 15:13:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:52.694 15:13:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:52.694 15:13:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:52.694 15:13:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:52.694 15:13:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1979284 00:04:52.694 15:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1979284 ']' 00:04:52.694 15:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1979284 00:04:52.694 15:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:52.694 15:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.694 15:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1979284 00:04:52.954 15:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.954 15:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.954 15:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1979284' 00:04:52.954 killing process with pid 1979284 00:04:52.954 15:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1979284 00:04:52.954 15:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1979284 00:04:53.212 00:04:53.212 real 0m1.429s 00:04:53.212 user 0m3.936s 00:04:53.212 sys 0m0.382s 00:04:53.212 15:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.212 15:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.212 
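The `check_remaining_locks` comparison shown above builds one array from a glob of the live lock files (`/var/tmp/spdk_cpu_lock_*`) and another from a brace expansion of the expected names (`/var/tmp/spdk_cpu_lock_{000..002}`), then compares them. A self-contained sketch of that pattern, using a temporary directory instead of `/var/tmp` so it is safe to run anywhere:

```shell
#!/usr/bin/env bash
# Sketch of a check_remaining_locks-style comparison (paths illustrative).
lockdir=$(mktemp -d)
touch "$lockdir"/spdk_cpu_lock_{000..002}             # simulate three per-core lock files
locks=("$lockdir"/spdk_cpu_lock_*)                    # glob: lock files actually present (sorted)
locks_expected=("$lockdir"/spdk_cpu_lock_{000..002})  # brace expansion: the expected set
[ "${locks[*]}" = "${locks_expected[*]}" ] && echo "locks match expected set"
```

The glob expands in lexicographic order, which matches the zero-padded brace sequence, so a plain string comparison of the joined arrays is enough.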
************************************ 00:04:53.212 END TEST locking_overlapped_coremask 00:04:53.212 ************************************ 00:04:53.212 15:13:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:53.212 15:13:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.212 15:13:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.212 15:13:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.212 ************************************ 00:04:53.212 START TEST locking_overlapped_coremask_via_rpc 00:04:53.212 ************************************ 00:04:53.212 15:13:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:53.212 15:13:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1979630 00:04:53.212 15:13:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1979630 /var/tmp/spdk.sock 00:04:53.212 15:13:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:53.212 15:13:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1979630 ']' 00:04:53.212 15:13:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.212 15:13:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.212 15:13:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:53.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.212 15:13:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.212 15:13:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.212 [2024-11-20 15:13:57.034432] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:04:53.212 [2024-11-20 15:13:57.034477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979630 ] 00:04:53.212 [2024-11-20 15:13:57.110747] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:53.212 [2024-11-20 15:13:57.110776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:53.470 [2024-11-20 15:13:57.156183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.470 [2024-11-20 15:13:57.156288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.470 [2024-11-20 15:13:57.156289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:53.470 15:13:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.471 15:13:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:53.471 15:13:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1979733 00:04:53.471 15:13:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1979733 /var/tmp/spdk2.sock 00:04:53.471 15:13:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:04:53.471 15:13:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1979733 ']' 00:04:53.471 15:13:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:53.471 15:13:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.471 15:13:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:53.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:53.471 15:13:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.471 15:13:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.729 [2024-11-20 15:13:57.416220] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:04:53.729 [2024-11-20 15:13:57.416265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979733 ] 00:04:53.729 [2024-11-20 15:13:57.509106] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:53.729 [2024-11-20 15:13:57.509136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:53.729 [2024-11-20 15:13:57.597132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:53.729 [2024-11-20 15:13:57.597251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:53.729 [2024-11-20 15:13:57.597252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:54.666 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.666 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:54.666 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:54.666 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.666 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.666 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.666 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:54.666 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:54.666 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:54.666 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:54.666 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.666 15:13:58 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:54.666 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.666 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:54.666 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.666 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.666 [2024-11-20 15:13:58.276021] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1979630 has claimed it. 00:04:54.667 request: 00:04:54.667 { 00:04:54.667 "method": "framework_enable_cpumask_locks", 00:04:54.667 "req_id": 1 00:04:54.667 } 00:04:54.667 Got JSON-RPC error response 00:04:54.667 response: 00:04:54.667 { 00:04:54.667 "code": -32603, 00:04:54.667 "message": "Failed to claim CPU core: 2" 00:04:54.667 } 00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1979630 /var/tmp/spdk.sock 00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1979630 ']' 00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1979733 /var/tmp/spdk2.sock 00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1979733 ']' 00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:54.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.667 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.926 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.926 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:54.926 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:54.926 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:54.926 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:54.926 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:54.926 00:04:54.926 real 0m1.721s 00:04:54.926 user 0m0.828s 00:04:54.926 sys 0m0.135s 00:04:54.926 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.926 15:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.926 ************************************ 00:04:54.926 END TEST locking_overlapped_coremask_via_rpc 00:04:54.926 ************************************ 00:04:54.926 15:13:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:54.926 15:13:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1979630 ]] 00:04:54.926 15:13:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1979630 00:04:54.926 15:13:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1979630 ']' 00:04:54.926 15:13:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1979630 00:04:54.926 15:13:58 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:54.926 15:13:58 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.926 15:13:58 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1979630 00:04:54.926 15:13:58 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.926 15:13:58 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.926 15:13:58 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1979630' 00:04:54.926 killing process with pid 1979630 00:04:54.926 15:13:58 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1979630 00:04:54.926 15:13:58 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1979630 00:04:55.495 15:13:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1979733 ]] 00:04:55.495 15:13:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1979733 00:04:55.495 15:13:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1979733 ']' 00:04:55.495 15:13:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1979733 00:04:55.495 15:13:59 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:55.495 15:13:59 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.495 15:13:59 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1979733 00:04:55.495 15:13:59 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:55.496 15:13:59 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:55.496 15:13:59 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1979733' 00:04:55.496 killing process with pid 1979733 00:04:55.496 15:13:59 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1979733 00:04:55.496 15:13:59 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1979733 00:04:55.756 15:13:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:55.756 15:13:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:55.756 15:13:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1979630 ]] 00:04:55.756 15:13:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1979630 00:04:55.756 15:13:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1979630 ']' 00:04:55.756 15:13:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1979630 00:04:55.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1979630) - No such process 00:04:55.756 15:13:59 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1979630 is not found' 00:04:55.756 Process with pid 1979630 is not found 00:04:55.756 15:13:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1979733 ]] 00:04:55.756 15:13:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1979733 00:04:55.756 15:13:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1979733 ']' 00:04:55.756 15:13:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1979733 00:04:55.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1979733) - No such process 00:04:55.756 15:13:59 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1979733 is not found' 00:04:55.756 Process with pid 1979733 is not found 00:04:55.756 15:13:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:55.756 00:04:55.756 real 0m14.289s 00:04:55.756 user 0m24.624s 00:04:55.756 sys 0m5.026s 00:04:55.756 15:13:59 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.756 
15:13:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.756 ************************************ 00:04:55.756 END TEST cpu_locks 00:04:55.756 ************************************ 00:04:55.756 00:04:55.756 real 0m39.355s 00:04:55.756 user 1m15.344s 00:04:55.756 sys 0m8.541s 00:04:55.756 15:13:59 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.756 15:13:59 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.756 ************************************ 00:04:55.756 END TEST event 00:04:55.756 ************************************ 00:04:55.756 15:13:59 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:55.756 15:13:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.756 15:13:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.756 15:13:59 -- common/autotest_common.sh@10 -- # set +x 00:04:55.756 ************************************ 00:04:55.756 START TEST thread 00:04:55.756 ************************************ 00:04:55.756 15:13:59 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:56.015 * Looking for test storage... 
00:04:56.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:56.016 15:13:59 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:56.016 15:13:59 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:04:56.016 15:13:59 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:56.016 15:13:59 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:56.016 15:13:59 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.016 15:13:59 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.016 15:13:59 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.016 15:13:59 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.016 15:13:59 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.016 15:13:59 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.016 15:13:59 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.016 15:13:59 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.016 15:13:59 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.016 15:13:59 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.016 15:13:59 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.016 15:13:59 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:56.016 15:13:59 thread -- scripts/common.sh@345 -- # : 1 00:04:56.016 15:13:59 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.016 15:13:59 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.016 15:13:59 thread -- scripts/common.sh@365 -- # decimal 1 00:04:56.016 15:13:59 thread -- scripts/common.sh@353 -- # local d=1 00:04:56.016 15:13:59 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.016 15:13:59 thread -- scripts/common.sh@355 -- # echo 1 00:04:56.016 15:13:59 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.016 15:13:59 thread -- scripts/common.sh@366 -- # decimal 2 00:04:56.016 15:13:59 thread -- scripts/common.sh@353 -- # local d=2 00:04:56.016 15:13:59 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.016 15:13:59 thread -- scripts/common.sh@355 -- # echo 2 00:04:56.016 15:13:59 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.016 15:13:59 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.016 15:13:59 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.016 15:13:59 thread -- scripts/common.sh@368 -- # return 0 00:04:56.016 15:13:59 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.016 15:13:59 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:56.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.016 --rc genhtml_branch_coverage=1 00:04:56.016 --rc genhtml_function_coverage=1 00:04:56.016 --rc genhtml_legend=1 00:04:56.016 --rc geninfo_all_blocks=1 00:04:56.016 --rc geninfo_unexecuted_blocks=1 00:04:56.016 00:04:56.016 ' 00:04:56.016 15:13:59 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:56.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.016 --rc genhtml_branch_coverage=1 00:04:56.016 --rc genhtml_function_coverage=1 00:04:56.016 --rc genhtml_legend=1 00:04:56.016 --rc geninfo_all_blocks=1 00:04:56.016 --rc geninfo_unexecuted_blocks=1 00:04:56.016 00:04:56.016 ' 00:04:56.016 15:13:59 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:56.016 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.016 --rc genhtml_branch_coverage=1 00:04:56.016 --rc genhtml_function_coverage=1 00:04:56.016 --rc genhtml_legend=1 00:04:56.016 --rc geninfo_all_blocks=1 00:04:56.016 --rc geninfo_unexecuted_blocks=1 00:04:56.016 00:04:56.016 ' 00:04:56.016 15:13:59 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:56.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.016 --rc genhtml_branch_coverage=1 00:04:56.016 --rc genhtml_function_coverage=1 00:04:56.016 --rc genhtml_legend=1 00:04:56.016 --rc geninfo_all_blocks=1 00:04:56.016 --rc geninfo_unexecuted_blocks=1 00:04:56.016 00:04:56.016 ' 00:04:56.016 15:13:59 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:56.016 15:13:59 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:56.016 15:13:59 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.016 15:13:59 thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.016 ************************************ 00:04:56.016 START TEST thread_poller_perf 00:04:56.016 ************************************ 00:04:56.016 15:13:59 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:56.016 [2024-11-20 15:13:59.820304] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:04:56.016 [2024-11-20 15:13:59.820374] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1980241 ] 00:04:56.016 [2024-11-20 15:13:59.900767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.275 [2024-11-20 15:13:59.942558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.275 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:57.212 [2024-11-20T14:14:01.120Z] ====================================== 00:04:57.212 [2024-11-20T14:14:01.120Z] busy:2307806286 (cyc) 00:04:57.212 [2024-11-20T14:14:01.120Z] total_run_count: 407000 00:04:57.212 [2024-11-20T14:14:01.120Z] tsc_hz: 2300000000 (cyc) 00:04:57.212 [2024-11-20T14:14:01.120Z] ====================================== 00:04:57.212 [2024-11-20T14:14:01.120Z] poller_cost: 5670 (cyc), 2465 (nsec) 00:04:57.212 00:04:57.212 real 0m1.190s 00:04:57.212 user 0m1.108s 00:04:57.212 sys 0m0.077s 00:04:57.212 15:14:00 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.212 15:14:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:57.212 ************************************ 00:04:57.212 END TEST thread_poller_perf 00:04:57.212 ************************************ 00:04:57.212 15:14:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:57.212 15:14:01 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:57.212 15:14:01 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.212 15:14:01 thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.212 ************************************ 00:04:57.212 START TEST thread_poller_perf 00:04:57.212 
************************************ 00:04:57.212 15:14:01 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:57.212 [2024-11-20 15:14:01.079540] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:04:57.212 [2024-11-20 15:14:01.079597] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1980417 ] 00:04:57.471 [2024-11-20 15:14:01.156090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.471 [2024-11-20 15:14:01.196873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.471 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:58.408 [2024-11-20T14:14:02.316Z] ====================================== 00:04:58.408 [2024-11-20T14:14:02.316Z] busy:2301529436 (cyc) 00:04:58.408 [2024-11-20T14:14:02.316Z] total_run_count: 5435000 00:04:58.408 [2024-11-20T14:14:02.316Z] tsc_hz: 2300000000 (cyc) 00:04:58.408 [2024-11-20T14:14:02.316Z] ====================================== 00:04:58.408 [2024-11-20T14:14:02.316Z] poller_cost: 423 (cyc), 183 (nsec) 00:04:58.408 00:04:58.408 real 0m1.177s 00:04:58.408 user 0m1.099s 00:04:58.408 sys 0m0.074s 00:04:58.408 15:14:02 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.408 15:14:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:58.408 ************************************ 00:04:58.408 END TEST thread_poller_perf 00:04:58.408 ************************************ 00:04:58.408 15:14:02 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:58.408 00:04:58.408 real 0m2.685s 00:04:58.408 user 0m2.344s 00:04:58.408 sys 0m0.355s 00:04:58.408 15:14:02 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.408 15:14:02 thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.408 ************************************ 00:04:58.408 END TEST thread 00:04:58.408 ************************************ 00:04:58.408 15:14:02 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:58.408 15:14:02 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:58.408 15:14:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.408 15:14:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.408 15:14:02 -- common/autotest_common.sh@10 -- # set +x 00:04:58.668 ************************************ 00:04:58.668 START TEST app_cmdline 00:04:58.668 ************************************ 00:04:58.668 15:14:02 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:58.668 * Looking for test storage... 00:04:58.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:58.668 15:14:02 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:58.668 15:14:02 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:04:58.668 15:14:02 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:58.668 15:14:02 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.668 15:14:02 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:58.668 15:14:02 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.668 15:14:02 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:58.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.668 --rc genhtml_branch_coverage=1 
00:04:58.668 --rc genhtml_function_coverage=1 00:04:58.668 --rc genhtml_legend=1 00:04:58.668 --rc geninfo_all_blocks=1 00:04:58.668 --rc geninfo_unexecuted_blocks=1 00:04:58.668 00:04:58.668 ' 00:04:58.668 15:14:02 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:58.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.668 --rc genhtml_branch_coverage=1 00:04:58.668 --rc genhtml_function_coverage=1 00:04:58.668 --rc genhtml_legend=1 00:04:58.668 --rc geninfo_all_blocks=1 00:04:58.668 --rc geninfo_unexecuted_blocks=1 00:04:58.668 00:04:58.668 ' 00:04:58.668 15:14:02 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:58.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.668 --rc genhtml_branch_coverage=1 00:04:58.668 --rc genhtml_function_coverage=1 00:04:58.668 --rc genhtml_legend=1 00:04:58.668 --rc geninfo_all_blocks=1 00:04:58.668 --rc geninfo_unexecuted_blocks=1 00:04:58.668 00:04:58.668 ' 00:04:58.668 15:14:02 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:58.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.668 --rc genhtml_branch_coverage=1 00:04:58.668 --rc genhtml_function_coverage=1 00:04:58.668 --rc genhtml_legend=1 00:04:58.668 --rc geninfo_all_blocks=1 00:04:58.668 --rc geninfo_unexecuted_blocks=1 00:04:58.668 00:04:58.668 ' 00:04:58.668 15:14:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:58.668 15:14:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1980745 00:04:58.668 15:14:02 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:58.668 15:14:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1980745 00:04:58.668 15:14:02 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1980745 ']' 00:04:58.668 15:14:02 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:58.668 15:14:02 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.668 15:14:02 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.668 15:14:02 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.668 15:14:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:58.668 [2024-11-20 15:14:02.571006] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:04:58.668 [2024-11-20 15:14:02.571054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1980745 ] 00:04:58.927 [2024-11-20 15:14:02.643790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.927 [2024-11-20 15:14:02.686580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.186 15:14:02 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.186 15:14:02 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:59.186 15:14:02 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:59.186 { 00:04:59.186 "version": "SPDK v25.01-pre git sha1 ede20dc4e", 00:04:59.186 "fields": { 00:04:59.186 "major": 25, 00:04:59.186 "minor": 1, 00:04:59.186 "patch": 0, 00:04:59.186 "suffix": "-pre", 00:04:59.186 "commit": "ede20dc4e" 00:04:59.186 } 00:04:59.186 } 00:04:59.444 15:14:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:59.444 15:14:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:59.444 15:14:03 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:04:59.444 15:14:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:59.444 15:14:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:59.444 15:14:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:59.444 15:14:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.444 15:14:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:59.444 15:14:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:59.444 15:14:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:59.444 request: 00:04:59.444 { 00:04:59.444 "method": "env_dpdk_get_mem_stats", 00:04:59.444 "req_id": 1 00:04:59.444 } 00:04:59.444 Got JSON-RPC error response 00:04:59.444 response: 00:04:59.444 { 00:04:59.444 "code": -32601, 00:04:59.444 "message": "Method not found" 00:04:59.444 } 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:59.444 15:14:03 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1980745 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1980745 ']' 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1980745 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.444 15:14:03 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1980745 00:04:59.704 15:14:03 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.704 15:14:03 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.704 15:14:03 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1980745' 00:04:59.704 killing process with pid 1980745 00:04:59.704 
15:14:03 app_cmdline -- common/autotest_common.sh@973 -- # kill 1980745 00:04:59.704 15:14:03 app_cmdline -- common/autotest_common.sh@978 -- # wait 1980745 00:04:59.963 00:04:59.963 real 0m1.348s 00:04:59.963 user 0m1.554s 00:04:59.963 sys 0m0.463s 00:04:59.963 15:14:03 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.963 15:14:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:59.963 ************************************ 00:04:59.963 END TEST app_cmdline 00:04:59.963 ************************************ 00:04:59.963 15:14:03 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:59.963 15:14:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.963 15:14:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.963 15:14:03 -- common/autotest_common.sh@10 -- # set +x 00:04:59.963 ************************************ 00:04:59.963 START TEST version 00:04:59.963 ************************************ 00:04:59.963 15:14:03 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:59.963 * Looking for test storage... 
00:04:59.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:59.963 15:14:03 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.963 15:14:03 version -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.963 15:14:03 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.223 15:14:03 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.223 15:14:03 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.223 15:14:03 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.223 15:14:03 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.223 15:14:03 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.223 15:14:03 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.223 15:14:03 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.223 15:14:03 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.223 15:14:03 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.223 15:14:03 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.223 15:14:03 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.223 15:14:03 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.223 15:14:03 version -- scripts/common.sh@344 -- # case "$op" in 00:05:00.223 15:14:03 version -- scripts/common.sh@345 -- # : 1 00:05:00.223 15:14:03 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.223 15:14:03 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.223 15:14:03 version -- scripts/common.sh@365 -- # decimal 1 00:05:00.223 15:14:03 version -- scripts/common.sh@353 -- # local d=1 00:05:00.223 15:14:03 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.223 15:14:03 version -- scripts/common.sh@355 -- # echo 1 00:05:00.223 15:14:03 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.223 15:14:03 version -- scripts/common.sh@366 -- # decimal 2 00:05:00.223 15:14:03 version -- scripts/common.sh@353 -- # local d=2 00:05:00.223 15:14:03 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.223 15:14:03 version -- scripts/common.sh@355 -- # echo 2 00:05:00.223 15:14:03 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.223 15:14:03 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.223 15:14:03 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.223 15:14:03 version -- scripts/common.sh@368 -- # return 0 00:05:00.223 15:14:03 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.223 15:14:03 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.223 --rc genhtml_branch_coverage=1 00:05:00.223 --rc genhtml_function_coverage=1 00:05:00.223 --rc genhtml_legend=1 00:05:00.223 --rc geninfo_all_blocks=1 00:05:00.223 --rc geninfo_unexecuted_blocks=1 00:05:00.223 00:05:00.223 ' 00:05:00.223 15:14:03 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.223 --rc genhtml_branch_coverage=1 00:05:00.223 --rc genhtml_function_coverage=1 00:05:00.223 --rc genhtml_legend=1 00:05:00.223 --rc geninfo_all_blocks=1 00:05:00.223 --rc geninfo_unexecuted_blocks=1 00:05:00.223 00:05:00.223 ' 00:05:00.223 15:14:03 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.223 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.223 --rc genhtml_branch_coverage=1 00:05:00.223 --rc genhtml_function_coverage=1 00:05:00.223 --rc genhtml_legend=1 00:05:00.223 --rc geninfo_all_blocks=1 00:05:00.223 --rc geninfo_unexecuted_blocks=1 00:05:00.223 00:05:00.223 ' 00:05:00.223 15:14:03 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.223 --rc genhtml_branch_coverage=1 00:05:00.223 --rc genhtml_function_coverage=1 00:05:00.223 --rc genhtml_legend=1 00:05:00.223 --rc geninfo_all_blocks=1 00:05:00.223 --rc geninfo_unexecuted_blocks=1 00:05:00.223 00:05:00.223 ' 00:05:00.223 15:14:03 version -- app/version.sh@17 -- # get_header_version major 00:05:00.223 15:14:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:00.223 15:14:03 version -- app/version.sh@14 -- # cut -f2 00:05:00.223 15:14:03 version -- app/version.sh@14 -- # tr -d '"' 00:05:00.223 15:14:03 version -- app/version.sh@17 -- # major=25 00:05:00.223 15:14:03 version -- app/version.sh@18 -- # get_header_version minor 00:05:00.223 15:14:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:00.223 15:14:03 version -- app/version.sh@14 -- # cut -f2 00:05:00.223 15:14:03 version -- app/version.sh@14 -- # tr -d '"' 00:05:00.223 15:14:03 version -- app/version.sh@18 -- # minor=1 00:05:00.223 15:14:03 version -- app/version.sh@19 -- # get_header_version patch 00:05:00.223 15:14:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:00.223 15:14:03 version -- app/version.sh@14 -- # cut -f2 00:05:00.223 15:14:03 version -- app/version.sh@14 -- # tr -d '"' 00:05:00.223 
15:14:03 version -- app/version.sh@19 -- # patch=0 00:05:00.223 15:14:03 version -- app/version.sh@20 -- # get_header_version suffix 00:05:00.223 15:14:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:00.223 15:14:03 version -- app/version.sh@14 -- # cut -f2 00:05:00.223 15:14:03 version -- app/version.sh@14 -- # tr -d '"' 00:05:00.223 15:14:03 version -- app/version.sh@20 -- # suffix=-pre 00:05:00.223 15:14:03 version -- app/version.sh@22 -- # version=25.1 00:05:00.223 15:14:03 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:00.223 15:14:03 version -- app/version.sh@28 -- # version=25.1rc0 00:05:00.224 15:14:03 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:00.224 15:14:03 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:00.224 15:14:04 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:00.224 15:14:04 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:00.224 00:05:00.224 real 0m0.250s 00:05:00.224 user 0m0.171s 00:05:00.224 sys 0m0.122s 00:05:00.224 15:14:04 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.224 15:14:04 version -- common/autotest_common.sh@10 -- # set +x 00:05:00.224 ************************************ 00:05:00.224 END TEST version 00:05:00.224 ************************************ 00:05:00.224 15:14:04 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:00.224 15:14:04 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:00.224 15:14:04 -- spdk/autotest.sh@194 -- # uname -s 00:05:00.224 15:14:04 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:00.224 15:14:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:00.224 15:14:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:00.224 15:14:04 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:00.224 15:14:04 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:00.224 15:14:04 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:00.224 15:14:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:00.224 15:14:04 -- common/autotest_common.sh@10 -- # set +x 00:05:00.224 15:14:04 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:00.224 15:14:04 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:00.224 15:14:04 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:00.224 15:14:04 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:00.224 15:14:04 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:00.224 15:14:04 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:00.224 15:14:04 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:00.224 15:14:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:00.224 15:14:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.224 15:14:04 -- common/autotest_common.sh@10 -- # set +x 00:05:00.224 ************************************ 00:05:00.224 START TEST nvmf_tcp 00:05:00.224 ************************************ 00:05:00.224 15:14:04 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:00.483 * Looking for test storage... 
00:05:00.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:00.483 15:14:04 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.483 15:14:04 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.483 15:14:04 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.483 15:14:04 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.483 15:14:04 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:00.483 15:14:04 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.483 15:14:04 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.483 --rc genhtml_branch_coverage=1 00:05:00.483 --rc genhtml_function_coverage=1 00:05:00.483 --rc genhtml_legend=1 00:05:00.483 --rc geninfo_all_blocks=1 00:05:00.483 --rc geninfo_unexecuted_blocks=1 00:05:00.483 00:05:00.483 ' 00:05:00.483 15:14:04 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.483 --rc genhtml_branch_coverage=1 00:05:00.483 --rc genhtml_function_coverage=1 00:05:00.483 --rc genhtml_legend=1 00:05:00.483 --rc geninfo_all_blocks=1 00:05:00.483 --rc geninfo_unexecuted_blocks=1 00:05:00.483 00:05:00.483 ' 00:05:00.483 15:14:04 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:00.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.483 --rc genhtml_branch_coverage=1 00:05:00.483 --rc genhtml_function_coverage=1 00:05:00.483 --rc genhtml_legend=1 00:05:00.483 --rc geninfo_all_blocks=1 00:05:00.483 --rc geninfo_unexecuted_blocks=1 00:05:00.484 00:05:00.484 ' 00:05:00.484 15:14:04 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.484 --rc genhtml_branch_coverage=1 00:05:00.484 --rc genhtml_function_coverage=1 00:05:00.484 --rc genhtml_legend=1 00:05:00.484 --rc geninfo_all_blocks=1 00:05:00.484 --rc geninfo_unexecuted_blocks=1 00:05:00.484 00:05:00.484 ' 00:05:00.484 15:14:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:00.484 15:14:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:00.484 15:14:04 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:00.484 15:14:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:00.484 15:14:04 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.484 15:14:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.484 ************************************ 00:05:00.484 START TEST nvmf_target_core 00:05:00.484 ************************************ 00:05:00.484 15:14:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:00.744 * Looking for test storage... 
00:05:00.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.744 --rc genhtml_branch_coverage=1 00:05:00.744 --rc genhtml_function_coverage=1 00:05:00.744 --rc genhtml_legend=1 00:05:00.744 --rc geninfo_all_blocks=1 00:05:00.744 --rc geninfo_unexecuted_blocks=1 00:05:00.744 00:05:00.744 ' 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.744 --rc genhtml_branch_coverage=1 
00:05:00.744 --rc genhtml_function_coverage=1 00:05:00.744 --rc genhtml_legend=1 00:05:00.744 --rc geninfo_all_blocks=1 00:05:00.744 --rc geninfo_unexecuted_blocks=1 00:05:00.744 00:05:00.744 ' 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.744 --rc genhtml_branch_coverage=1 00:05:00.744 --rc genhtml_function_coverage=1 00:05:00.744 --rc genhtml_legend=1 00:05:00.744 --rc geninfo_all_blocks=1 00:05:00.744 --rc geninfo_unexecuted_blocks=1 00:05:00.744 00:05:00.744 ' 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.744 --rc genhtml_branch_coverage=1 00:05:00.744 --rc genhtml_function_coverage=1 00:05:00.744 --rc genhtml_legend=1 00:05:00.744 --rc geninfo_all_blocks=1 00:05:00.744 --rc geninfo_unexecuted_blocks=1 00:05:00.744 00:05:00.744 ' 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:00.744 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:00.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:00.745 ************************************ 00:05:00.745 START TEST nvmf_abort 00:05:00.745 ************************************ 00:05:00.745 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:01.005 * Looking for test storage... 
00:05:01.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.005 
15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.005 --rc genhtml_branch_coverage=1 00:05:01.005 --rc genhtml_function_coverage=1 00:05:01.005 --rc genhtml_legend=1 00:05:01.005 --rc geninfo_all_blocks=1 00:05:01.005 --rc 
geninfo_unexecuted_blocks=1 00:05:01.005 00:05:01.005 ' 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.005 --rc genhtml_branch_coverage=1 00:05:01.005 --rc genhtml_function_coverage=1 00:05:01.005 --rc genhtml_legend=1 00:05:01.005 --rc geninfo_all_blocks=1 00:05:01.005 --rc geninfo_unexecuted_blocks=1 00:05:01.005 00:05:01.005 ' 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.005 --rc genhtml_branch_coverage=1 00:05:01.005 --rc genhtml_function_coverage=1 00:05:01.005 --rc genhtml_legend=1 00:05:01.005 --rc geninfo_all_blocks=1 00:05:01.005 --rc geninfo_unexecuted_blocks=1 00:05:01.005 00:05:01.005 ' 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.005 --rc genhtml_branch_coverage=1 00:05:01.005 --rc genhtml_function_coverage=1 00:05:01.005 --rc genhtml_legend=1 00:05:01.005 --rc geninfo_all_blocks=1 00:05:01.005 --rc geninfo_unexecuted_blocks=1 00:05:01.005 00:05:01.005 ' 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.005 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.006 15:14:04 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:01.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:01.006 15:14:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:07.578 15:14:10 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:07.578 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:07.578 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:07.579 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:07.579 15:14:10 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:07.579 Found net devices under 0000:86:00.0: cvl_0_0 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:05:07.579 Found net devices under 0000:86:00.1: cvl_0_1 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:07.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:07.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms
00:05:07.579
00:05:07.579 --- 10.0.0.2 ping statistics ---
00:05:07.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:07.579 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms
00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:05:07.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:05:07.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms
00:05:07.579
00:05:07.579 --- 10.0.0.1 ping statistics ---
00:05:07.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:07.579 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms
00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0
00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort
-- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1984331 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1984331 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1984331 ']' 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.579 15:14:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:07.579 [2024-11-20 15:14:10.818196] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:05:07.579 [2024-11-20 15:14:10.818248] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:07.579 [2024-11-20 15:14:10.884858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:07.579 [2024-11-20 15:14:10.932399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:07.579 [2024-11-20 15:14:10.932436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:07.579 [2024-11-20 15:14:10.932443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:07.579 [2024-11-20 15:14:10.932449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:07.579 [2024-11-20 15:14:10.932454] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:07.579 [2024-11-20 15:14:10.933754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:07.579 [2024-11-20 15:14:10.935963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:07.579 [2024-11-20 15:14:10.935967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.579 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.579 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:07.579 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:07.579 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:07.579 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:07.579 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:07.579 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:07.580 [2024-11-20 15:14:11.080530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:07.580 Malloc0 00:05:07.580 15:14:11 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:07.580 Delay0 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:07.580 [2024-11-20 15:14:11.144528] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.580 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:07.580 [2024-11-20 15:14:11.269254] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:09.608 Initializing NVMe Controllers 00:05:09.608 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:09.608 controller IO queue size 128 less than required 00:05:09.608 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:09.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:09.608 Initialization complete. Launching workers. 
00:05:09.608 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 36823 00:05:09.608 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36888, failed to submit 62 00:05:09.608 success 36827, unsuccessful 61, failed 0 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:09.608 rmmod nvme_tcp 00:05:09.608 rmmod nvme_fabrics 00:05:09.608 rmmod nvme_keyring 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:09.608 15:14:13 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1984331 ']' 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1984331 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1984331 ']' 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1984331 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1984331 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1984331' 00:05:09.608 killing process with pid 1984331 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1984331 00:05:09.608 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1984331 00:05:09.868 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:09.868 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:09.868 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:09.868 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:09.868 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:09.868 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:05:09.868 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:09.868 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:09.868 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:09.868 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:09.868 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:09.868 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:12.407 00:05:12.407 real 0m11.134s 00:05:12.407 user 0m11.510s 00:05:12.407 sys 0m5.422s 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.407 ************************************ 00:05:12.407 END TEST nvmf_abort 00:05:12.407 ************************************ 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:12.407 ************************************ 00:05:12.407 START TEST nvmf_ns_hotplug_stress 00:05:12.407 ************************************ 00:05:12.407 15:14:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:12.407 * Looking for test storage... 00:05:12.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.407 
15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.407 15:14:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.407 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.408 --rc genhtml_branch_coverage=1 00:05:12.408 --rc genhtml_function_coverage=1 00:05:12.408 --rc genhtml_legend=1 00:05:12.408 --rc geninfo_all_blocks=1 00:05:12.408 --rc geninfo_unexecuted_blocks=1 00:05:12.408 00:05:12.408 ' 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.408 --rc genhtml_branch_coverage=1 00:05:12.408 --rc genhtml_function_coverage=1 00:05:12.408 --rc genhtml_legend=1 00:05:12.408 --rc geninfo_all_blocks=1 00:05:12.408 --rc geninfo_unexecuted_blocks=1 00:05:12.408 00:05:12.408 ' 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.408 --rc genhtml_branch_coverage=1 00:05:12.408 --rc genhtml_function_coverage=1 00:05:12.408 --rc genhtml_legend=1 00:05:12.408 --rc geninfo_all_blocks=1 00:05:12.408 --rc geninfo_unexecuted_blocks=1 00:05:12.408 00:05:12.408 ' 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.408 --rc genhtml_branch_coverage=1 00:05:12.408 --rc genhtml_function_coverage=1 00:05:12.408 --rc genhtml_legend=1 00:05:12.408 --rc geninfo_all_blocks=1 00:05:12.408 --rc geninfo_unexecuted_blocks=1 00:05:12.408 
00:05:12.408 ' 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:12.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:12.408 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:18.983 15:14:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:18.983 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:18.983 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:18.983 15:14:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:18.983 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:18.984 Found net devices under 0000:86:00.0: cvl_0_0 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:18.984 15:14:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:18.984 Found net devices under 0000:86:00.1: cvl_0_1 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:18.984 15:14:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:18.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:18.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:05:18.984 00:05:18.984 --- 10.0.0.2 ping statistics --- 00:05:18.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:18.984 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:18.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:18.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:05:18.984 00:05:18.984 --- 10.0.0.1 ping statistics --- 00:05:18.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:18.984 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1988391 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1988391 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1988391 ']' 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
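
The `ip`/`iptables` sequence traced above (moving `cvl_0_0` into a namespace, addressing both ends, opening TCP port 4420, then ping-verifying both directions) can be condensed into a sketch. This is a dry-run illustration, not SPDK's `nvmf/common.sh` itself: `run` only echoes, so it is safe without root or the real `cvl_0_*` devices; interface names and IPs are taken from the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns-based TCP topology built in the log above.
set -euo pipefail

TARGET_IF=cvl_0_0        # target-side net device (from the log)
INITIATOR_IF=cvl_0_1     # initiator-side net device
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1
NETNS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }   # swap the body for "$@" to execute for real

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NETNS"
run ip link set "$TARGET_IF" netns "$NETNS"                       # target NIC lives in the netns
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NETNS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NETNS" ip link set "$TARGET_IF" up
run ip netns exec "$NETNS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TARGET_IP"                                        # initiator -> target
run ip netns exec "$NETNS" ping -c 1 "$INITIATOR_IP"              # target -> initiator
```

The target application is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...`), which is why the listener address 10.0.0.2 is only reachable through this topology.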
00:05:18.984 15:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.984 15:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:18.984 [2024-11-20 15:14:22.049598] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:05:18.984 [2024-11-20 15:14:22.049646] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:18.984 [2024-11-20 15:14:22.132995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:18.984 [2024-11-20 15:14:22.177248] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:18.984 [2024-11-20 15:14:22.177284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:18.984 [2024-11-20 15:14:22.177292] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:18.984 [2024-11-20 15:14:22.177298] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:18.984 [2024-11-20 15:14:22.177303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:18.984 [2024-11-20 15:14:22.178682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.984 [2024-11-20 15:14:22.178788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.984 [2024-11-20 15:14:22.178789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:19.244 15:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.244 15:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:19.244 15:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:19.244 15:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.244 15:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:19.244 15:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:19.244 15:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:19.244 15:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:19.244 [2024-11-20 15:14:23.093521] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:19.244 15:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:19.504 15:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:19.762 [2024-11-20 15:14:23.495010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:19.762 15:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:20.021 15:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:20.021 Malloc0 00:05:20.280 15:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:20.280 Delay0 00:05:20.280 15:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.539 15:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:20.798 NULL1 00:05:20.798 15:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:21.057 15:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:21.057 15:14:24 
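
The `rpc.py` calls traced above provision the target before the stress loop starts: a TCP transport, subsystem `cnode1`, data and discovery listeners, a malloc bdev wrapped in a delay bdev, and a 1000-block null bdev, with both bdevs attached as namespaces. A dry-run sketch of that sequence (the `RPC` path is illustrative; the log uses the full SPDK tree path, and `rpc` here only echoes):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the provisioning RPCs issued in the log above.
set -euo pipefail

RPC="scripts/rpc.py"               # illustrative; real path is under the SPDK checkout
NQN=nqn.2016-06.io.spdk:cnode1

rpc() { echo "+ $RPC $*"; }        # echo instead of invoking SPDK's rpc.py

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns "$NQN" Delay0
rpc bdev_null_create NULL1 1000 512
rpc nvmf_subsystem_add_ns "$NQN" NULL1
```

With the namespaces in place, `spdk_nvme_perf` is pointed at `trtype:tcp traddr:10.0.0.2 trsvcid:4420` to generate I/O while namespaces are hotplugged underneath it.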
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1988869 00:05:21.057 15:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:21.057 15:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.058 15:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.316 15:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:21.316 15:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:21.575 true 00:05:21.575 15:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:21.575 15:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.834 15:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.093 15:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:22.093 15:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:22.093 true 00:05:22.093 15:14:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:22.093 15:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.352 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.611 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:22.611 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:22.870 true 00:05:22.870 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:22.870 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.129 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.388 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:23.388 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:23.388 true 00:05:23.388 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:23.388 15:14:27 
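
The repeating pattern in the trace (`kill -0 $PERF_PID`, `nvmf_subsystem_remove_ns ... 1`, `nvmf_subsystem_add_ns ... Delay0`, `bdev_null_resize NULL1 <n>` with `n` counting up from 1001) is the hotplug stress loop itself. A dry-run sketch of that loop, assuming the structure visible in the log; `perf_alive` stands in for the real `kill -0 $PERF_PID` liveness check and the iteration cap is only so the sketch terminates:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the ns_hotplug_stress loop: while perf runs, yank
# namespace 1, re-add Delay0, and grow NULL1 by one block each pass.
set -euo pipefail

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1000                              # NULL1 starts at 1000 blocks
rpc() { echo "+ rpc.py $*"; }               # echo instead of invoking rpc.py
perf_alive() { (( null_size < 1003 )); }    # stand-in for 'kill -0 $PERF_PID'

while perf_alive; do
  rpc nvmf_subsystem_remove_ns "$NQN" 1     # hot-remove namespace 1 under live I/O
  rpc nvmf_subsystem_add_ns "$NQN" Delay0   # hot-add it back
  null_size=$(( null_size + 1 ))
  rpc bdev_null_resize NULL1 "$null_size"   # resize the other namespace's bdev
done
```

The perf process is expected to absorb the resulting namespace-change AERs and keep issuing I/O; the loop in the log runs for the full 30-second perf duration, which is why the resize counter climbs steadily (1001, 1002, ...).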
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.648 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.907 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:23.907 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:24.166 true 00:05:24.166 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:24.166 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.426 15:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.685 15:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:24.685 15:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:24.685 true 00:05:24.685 15:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:24.685 15:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.944 15:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.203 15:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:25.203 15:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:25.462 true 00:05:25.462 15:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:25.462 15:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.721 15:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.721 15:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:25.721 15:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:25.981 true 00:05:25.981 15:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:25.981 15:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.279 
15:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.539 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:26.539 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:26.539 true 00:05:26.798 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:26.798 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.798 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.057 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:27.057 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:27.316 true 00:05:27.316 15:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:27.316 15:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.575 15:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.833 15:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:27.833 15:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:27.833 true 00:05:27.833 15:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:27.833 15:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.093 15:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.352 15:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:28.352 15:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:28.611 true 00:05:28.611 15:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:28.611 15:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.871 15:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.871 
15:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:28.871 15:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:29.130 true 00:05:29.130 15:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:29.130 15:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.389 15:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.648 15:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:29.648 15:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:29.907 true 00:05:29.907 15:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:29.907 15:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.166 15:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.166 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:30.166 15:14:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:30.425 true 00:05:30.425 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:30.425 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.685 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.944 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:30.944 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:31.203 true 00:05:31.203 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:31.203 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.463 15:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.463 15:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:31.463 15:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:31.723 true 00:05:31.723 15:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:31.723 15:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.982 15:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.241 15:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:32.241 15:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:32.500 true 00:05:32.500 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:32.500 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.500 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.759 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:32.759 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:33.019 true 00:05:33.019 15:14:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:33.019 15:14:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.277 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.536 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:33.536 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:33.536 true 00:05:33.796 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:33.796 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.796 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.054 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:34.054 15:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:34.313 true 00:05:34.313 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:34.313 15:14:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.572 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.830 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:34.830 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:34.830 true 00:05:34.830 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:34.830 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.089 15:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.347 15:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:35.347 15:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:35.606 true 00:05:35.606 15:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:35.606 15:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.865 15:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.125 15:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:36.125 15:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:36.125 true 00:05:36.125 15:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:36.125 15:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.383 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.645 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:36.645 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:36.906 true 00:05:36.906 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:36.906 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.164 
15:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.164 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:37.164 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:37.422 true 00:05:37.422 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:37.422 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.680 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.938 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:37.938 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:38.197 true 00:05:38.197 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:38.197 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.456 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.456 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:38.456 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:38.715 true 00:05:38.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:38.715 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.973 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.231 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:39.231 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:39.489 true 00:05:39.489 15:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:39.489 15:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.489 15:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.746 
15:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:39.746 15:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:40.042 true 00:05:40.042 15:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:40.042 15:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.299 15:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.558 15:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:40.558 15:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:40.558 true 00:05:40.558 15:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:40.558 15:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.816 15:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.073 15:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:41.073 15:14:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:41.331 true 00:05:41.331 15:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:41.331 15:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.590 15:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.590 15:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:41.849 15:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:41.849 true 00:05:41.849 15:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:41.849 15:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.107 15:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.365 15:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:42.365 15:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:42.625 true 00:05:42.625 15:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:42.625 15:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.883 15:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.883 15:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:42.883 15:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:43.142 true 00:05:43.142 15:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:43.142 15:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.400 15:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.659 15:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:05:43.659 15:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:05:43.918 true 00:05:43.918 15:14:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:43.918 15:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.177 15:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.177 15:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:44.177 15:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:44.436 true 00:05:44.436 15:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:44.436 15:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.694 15:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.953 15:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:05:44.953 15:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:05:45.212 true 00:05:45.212 15:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:45.212 15:14:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.471 15:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.471 15:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:45.471 15:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:05:45.730 true 00:05:45.730 15:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:45.730 15:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.988 15:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.247 15:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:05:46.247 15:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:05:46.506 true 00:05:46.506 15:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:46.506 15:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.764 15:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.764 15:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:05:46.765 15:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:05:47.024 true 00:05:47.024 15:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:47.024 15:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.282 15:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.541 15:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:05:47.541 15:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:05:47.800 true 00:05:47.800 15:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:47.800 15:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.800 
15:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.059 15:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:05:48.059 15:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:05:48.317 true 00:05:48.317 15:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:48.317 15:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.576 15:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.835 15:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:05:48.835 15:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:05:49.094 true 00:05:49.094 15:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:49.094 15:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.353 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.353 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:05:49.353 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:05:49.612 true 00:05:49.612 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:49.612 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.936 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.239 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:05:50.239 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:05:50.239 true 00:05:50.239 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869 00:05:50.239 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.531 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.790 
15:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:05:50.790 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:05:51.049 true
00:05:51.049 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869
00:05:51.049 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:51.308 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:51.308 Initializing NVMe Controllers
00:05:51.308 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:51.308 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:05:51.308 Controller IO queue size 128, less than required.
00:05:51.308 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:51.308 WARNING: Some requested NVMe devices were skipped
00:05:51.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:51.308 Initialization complete. Launching workers.
00:05:51.308 ========================================================
00:05:51.308 Latency(us)
00:05:51.308 Device Information : IOPS MiB/s Average min max
00:05:51.308 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 26597.00 12.99 4812.48 2362.63 8961.86
00:05:51.308 ========================================================
00:05:51.308 Total : 26597.00 12.99 4812.48 2362.63 8961.86
00:05:51.308
00:05:51.308 15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:05:51.308 15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:05:51.567 true
00:05:51.567 15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1988869
00:05:51.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1988869) - No such process
00:05:51.567 15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1988869
00:05:51.567 15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:51.825 15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:52.084 15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:52.084 15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:52.084 15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:05:52.084
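# The single-namespace phase traced above (ns_hotplug_stress.sh@44-@50 in the log)
# is: while the background I/O process is alive (kill -0), detach NSID 1, re-attach
# the Delay0 bdev, then grow the NULL1 null bdev by one block. A minimal sketch,
# assuming a hypothetical rpc() stand-in that just echoes instead of invoking the
# real scripts/rpc.py against a live SPDK target; the size cap makes it terminate
# deterministically, whereas the real loop runs until the I/O process exits:
rpc() { echo "rpc.py $*"; }

null_size=1021
io_pid=$$   # stand-in; the real script watches the backgrounded perf process

while kill -0 "$io_pid" 2>/dev/null && [ "$null_size" -lt 1024 ]; do
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 >/dev/null
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 >/dev/null
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"
done
# Once kill -0 reports "No such process", the script falls through to wait
# for the I/O process and removes both namespaces, as seen in the log.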
15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:52.084 15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:52.084 null0 00:05:52.084 15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:52.084 15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:52.084 15:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:52.342 null1 00:05:52.342 15:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:52.342 15:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:52.342 15:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:52.601 null2 00:05:52.601 15:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:52.601 15:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:52.601 15:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:52.861 null3 00:05:52.861 15:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:52.861 15:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( 
i < nthreads )) 00:05:52.861 15:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:52.861 null4 00:05:53.120 15:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:53.120 15:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:53.120 15:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:53.120 null5 00:05:53.120 15:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:53.120 15:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:53.120 15:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:53.379 null6 00:05:53.379 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:53.379 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:53.379 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:53.638 null7 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:53.638 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1994506 1994507 1994509 1994511 1994513 1994515 1994517 1994519
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:53.639 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:53.899 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:53.899 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:53.899 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:53.899 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:53.899 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:53.899 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:53.899 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:53.899 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:53.899 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:53.899 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:53.899 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:53.899 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:53.899 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:53.899 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:54.158 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.158 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.158 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:54.158 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.158 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.158 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:54.158 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.158 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.158 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.158 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.158 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:54.158 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.158 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.158 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:54.159 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:54.159 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.159 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.159 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:54.159 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:54.159 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:54.159 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:54.159 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:54.159 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:54.159 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:54.159 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:54.159 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.419 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:54.678 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:54.678 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:54.678 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:54.678 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:54.678 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:54.678 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:54.678 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:54.678 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:54.938 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:55.198 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:55.198 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:55.198 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:55.198 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:55.198 15:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:55.198 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:55.459 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:55.459 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:55.459 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:55.459 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:55.459 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:55.459 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:55.459 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:55.459 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:55.717 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:55.717 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:55.717 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:55.717 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:55.717 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:55.718 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:55.718 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:55.718 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:55.718 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:55.718 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:55.718 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:55.718 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:55.718 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:55.718 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:55.718 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:55.718 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:55.718 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:55.718 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:55.718 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:55.718 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:55.718 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:55.718 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:55.718 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:55.718 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:55.978 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:55.978 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:55.978 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:55.978 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:55.978 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:55.978 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:55.978 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:55.978 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:55.978 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
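The `sh@62`–`sh@66` markers in the trace belong to the driver loop that launches one `add_remove` worker per namespace in the background and then `wait`s on all of them, which is why add/remove lines for different nsids interleave above. A sketch of that launcher, under the same assumption of a hypothetical `rpc` stub in place of `scripts/rpc.py`:

```shell
#!/usr/bin/env bash
rpc() { :; }   # hypothetical no-op stand-in for scripts/rpc.py

# Same worker as logged at sh@14-sh@18: ten add/remove cycles for one ns.
add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8
pids=()
# sh@62-sh@64: one background worker per namespace (nsid i+1 backed by null<i>);
# concurrent workers are what stress the target's ns hotplug path.
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &
    pids+=($!)
done
# sh@66: block until every worker finishes, as in "wait 1994506 1994507 ..."
wait "${pids[@]}"
echo "all ${#pids[@]} workers done"
```

The eight PIDs passed to `wait` in the log correspond to the eight `pids+=($!)` entries collected by this loop.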
00:05:55.978 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.978 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:55.978 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.978 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.978 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:56.238 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.238 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.238 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:56.238 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.238 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.238 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.238 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:56.238 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.238 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:56.238 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.238 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.238 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:56.238 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.238 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.239 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:56.239 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.239 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.239 15:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:56.239 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:56.239 15:15:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:56.239 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.239 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:56.239 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:56.239 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:56.239 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:56.239 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:56.497 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.497 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.497 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:56.498 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.498 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.498 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:56.498 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.498 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.498 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.498 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:56.498 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.498 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:56.498 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.498 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.498 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:56.498 15:15:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.498 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.498 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.498 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:56.498 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.498 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:56.498 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.498 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.498 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:56.756 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:56.756 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:56.756 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:56.756 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:56.756 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:56.757 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:56.757 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.757 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.015 
15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:57.015 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:57.274 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:57.274 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:57.274 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:57.274 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:57.274 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.274 15:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.274 15:15:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.274 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:57.534 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:57.534 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:57.534 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:57.534 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:57.534 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.534 15:15:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:57.534 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:57.534 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:57.793 rmmod nvme_tcp 00:05:57.793 rmmod nvme_fabrics 00:05:57.793 rmmod nvme_keyring 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:57.793 15:15:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1988391 ']' 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1988391 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1988391 ']' 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1988391 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.793 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1988391 00:05:58.052 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:58.052 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:58.052 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1988391' 00:05:58.052 killing process with pid 1988391 00:05:58.052 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1988391 00:05:58.052 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1988391 00:05:58.052 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:58.052 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:58.052 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:58.052 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 
-- # iptr 00:05:58.052 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:58.052 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:58.052 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:58.052 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:58.052 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:58.052 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:58.052 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:58.052 15:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:00.588 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:00.588 00:06:00.588 real 0m48.156s 00:06:00.588 user 3m24.527s 00:06:00.588 sys 0m17.426s 00:06:00.588 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.588 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:00.588 ************************************ 00:06:00.588 END TEST nvmf_ns_hotplug_stress 00:06:00.588 ************************************ 00:06:00.588 15:15:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:00.588 15:15:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:00.588 15:15:03 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.588 15:15:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:00.588 ************************************ 00:06:00.588 START TEST nvmf_delete_subsystem 00:06:00.588 ************************************ 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:00.588 * Looking for test storage... 00:06:00.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.588 15:15:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@366 -- # ver2[v]=2 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.588 --rc genhtml_branch_coverage=1 00:06:00.588 --rc genhtml_function_coverage=1 00:06:00.588 --rc genhtml_legend=1 00:06:00.588 --rc geninfo_all_blocks=1 00:06:00.588 --rc geninfo_unexecuted_blocks=1 00:06:00.588 00:06:00.588 ' 00:06:00.588 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.588 --rc genhtml_branch_coverage=1 00:06:00.588 --rc genhtml_function_coverage=1 00:06:00.588 --rc genhtml_legend=1 00:06:00.588 --rc geninfo_all_blocks=1 00:06:00.588 --rc geninfo_unexecuted_blocks=1 00:06:00.588 00:06:00.589 ' 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.589 --rc genhtml_branch_coverage=1 00:06:00.589 --rc genhtml_function_coverage=1 00:06:00.589 --rc genhtml_legend=1 00:06:00.589 --rc geninfo_all_blocks=1 00:06:00.589 --rc geninfo_unexecuted_blocks=1 00:06:00.589 00:06:00.589 ' 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:00.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.589 --rc genhtml_branch_coverage=1 00:06:00.589 --rc genhtml_function_coverage=1 00:06:00.589 --rc genhtml_legend=1 00:06:00.589 --rc geninfo_all_blocks=1 00:06:00.589 --rc geninfo_unexecuted_blocks=1 00:06:00.589 00:06:00.589 ' 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:00.589 15:15:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.589 15:15:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:00.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:00.589 15:15:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:00.589 15:15:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:07.165 15:15:09 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:07.165 15:15:09 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:07.165 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:07.165 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:07.165 Found net devices under 0000:86:00.0: cvl_0_0 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:07.165 Found net devices under 0000:86:00.1: cvl_0_1 00:06:07.165 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:07.166 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:07.166 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:07.166 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:07.166 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:07.166 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:07.166 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:07.166 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:07.166 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:07.166 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:07.166 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:07.166 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:07.166 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:07.166 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:07.166 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:07.166 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:07.166 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:07.166 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:07.166 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:07.166 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:07.166 15:15:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m 
comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:07.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:07.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:06:07.166 00:06:07.166 --- 10.0.0.2 ping statistics --- 00:06:07.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:07.166 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:07.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:07.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:06:07.166 00:06:07.166 --- 10.0.0.1 ping statistics --- 00:06:07.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:07.166 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:07.166 15:15:10 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1999526 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1999526 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1999526 ']' 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.166 [2024-11-20 15:15:10.281767] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:06:07.166 [2024-11-20 15:15:10.281814] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:07.166 [2024-11-20 15:15:10.347554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.166 [2024-11-20 15:15:10.390187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:07.166 [2024-11-20 15:15:10.390224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:07.166 [2024-11-20 15:15:10.390231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:07.166 [2024-11-20 15:15:10.390237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:07.166 [2024-11-20 15:15:10.390242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:07.166 [2024-11-20 15:15:10.391386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.166 [2024-11-20 15:15:10.391389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.166 [2024-11-20 15:15:10.534984] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.166 [2024-11-20 15:15:10.555177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.166 NULL1 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.166 Delay0 00:06:07.166 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.166 15:15:10 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.167 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.167 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.167 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.167 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1999656 00:06:07.167 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:07.167 15:15:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:07.167 [2024-11-20 15:15:10.666108] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
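Reconstructed from the rpc_cmd trace above, the target setup amounts to the following RPC sequence. This is a dry-run sketch, not the test script itself: `rpc` is stubbed to echo its arguments so the block is self-contained; in the real run each call goes through `rpc_cmd` to a live nvmf_tgt application.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC sequence visible in the trace above.
# "rpc" is a stub that only echoes; against a live SPDK target it
# would be scripts/rpc.py (an assumption, per the usual SPDK layout).
rpc() { echo "rpc $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The delay bdev in front of NULL1 is what keeps I/O in flight long enough for the subsequent `nvmf_delete_subsystem` to race against active commands.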
00:06:09.072 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:09.072 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.072 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:09.072 [... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries elided ...]
00:06:09.073 [2024-11-20 15:15:12.781680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9e4a0 is same with the state(6) to be set
00:06:09.073 [... repeated completion-error entries elided ...]
00:06:09.073 [2024-11-20 15:15:12.786650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7218000c40 is same with the state(6) to be set
00:06:09.073 [... repeated completion-error entries elided ...]
00:06:09.074 [2024-11-20 15:15:12.787206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f721800d350 is same with the state(6) to be set
00:06:10.011 [2024-11-20 15:15:13.759997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9f9a0 is same with the state(6) to be set
00:06:10.011 [... repeated completion-error entries elided ...]
00:06:10.011 [2024-11-20 15:15:13.784817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9e2c0 is same with the state(6) to be set
00:06:10.011 [... repeated completion-error entries elided ...]
00:06:10.011 [2024-11-20 15:15:13.785117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9e860 is same with the state(6) to be set
00:06:10.011 [... repeated completion-error entries elided ...]
00:06:10.011 [2024-11-20 15:15:13.789334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f721800d680 is same with the state(6) to be set
00:06:10.011 [... repeated completion-error entries elided ...]
00:06:10.011 [2024-11-20 15:15:13.790472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f721800d020 is same with the state(6) to be set
00:06:10.011 Initializing NVMe Controllers
00:06:10.011 Attached to NVMe over
Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:10.011 Controller IO queue size 128, less than required.
00:06:10.011 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:10.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:10.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:10.011 Initialization complete. Launching workers.
00:06:10.011 ========================================================
00:06:10.011 Latency(us)
00:06:10.011 Device Information : IOPS MiB/s Average min max
00:06:10.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.77 0.08 911377.01 298.62 1006257.54
00:06:10.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.82 0.07 939913.64 391.79 1011186.08
00:06:10.011 ========================================================
00:06:10.011 Total : 315.59 0.15 925195.22 298.62 1011186.08
00:06:10.011
00:06:10.011 [2024-11-20 15:15:13.791048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc9f9a0 (9): Bad file descriptor
00:06:10.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:10.011 15:15:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:10.011 15:15:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:10.011 15:15:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1999656
00:06:10.011 15:15:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem --
target/delete_subsystem.sh@35 -- # kill -0 1999656 00:06:10.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1999656) - No such process 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1999656 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1999656 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1999656 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:10.580 [2024-11-20 15:15:14.317972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2000219 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 
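The bounded polling that follows in the trace (repeated `kill -0` / `sleep 0.5` with a `delay` counter) can be sketched standalone. In this sketch `sleep 1` is a hypothetical stand-in for the background `spdk_nvme_perf` run, and the bound of 20 iterations mirrors the `(( delay++ > 20 ))` check in delete_subsystem.sh:

```shell
#!/usr/bin/env bash
# Minimal sketch of the bounded kill -0 polling loop from the trace.
# "sleep 1" stands in for the background spdk_nvme_perf process.
sleep 1 &
perf_pid=$!

delay=0
# Poll until the process exits or ~10s (20 * 0.5s) have elapsed.
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 20 )); then
        echo "timed out waiting for $perf_pid" >&2
        break
    fi
    sleep 0.5
done
wait "$perf_pid" 2>/dev/null || true
echo "process $perf_pid has exited"
```

Once the PID is gone, `kill -0` fails and bash prints the "No such process" diagnostic seen in the log; the test then uses `wait` on the dead PID purely to collect its exit status.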
00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2000219 00:06:10.580 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:10.580 [2024-11-20 15:15:14.408073] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:11.148 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:11.148 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2000219 00:06:11.148 15:15:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:11.716 15:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:11.716 15:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2000219 00:06:11.716 15:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:11.975 15:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:11.975 15:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2000219 00:06:11.975 15:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:12.545 15:15:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:12.545 15:15:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2000219 00:06:12.545 15:15:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:13.113 15:15:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:13.113 15:15:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2000219 00:06:13.113 15:15:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:13.680 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:13.680 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2000219 00:06:13.680 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:13.680 Initializing NVMe Controllers 00:06:13.680 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:13.680 Controller IO queue size 128, less than required. 00:06:13.680 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:13.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:13.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:13.680 Initialization complete. Launching workers. 
00:06:13.680 ========================================================
00:06:13.680 Latency(us)
00:06:13.680 Device Information : IOPS MiB/s Average min max
00:06:13.680 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002159.38 1000145.22 1041396.05
00:06:13.680 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003943.69 1000147.99 1009845.58
00:06:13.681 ========================================================
00:06:13.681 Total : 256.00 0.12 1003051.54 1000145.22 1041396.05
00:06:13.681
00:06:14.248 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:14.248 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2000219
00:06:14.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2000219) - No such process
00:06:14.248 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2000219
00:06:14.248 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:14.248 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:14.248 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:14.248 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:06:14.248 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:14.248 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:06:14.249 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:14.249 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r
nvme-tcp 00:06:14.249 rmmod nvme_tcp 00:06:14.249 rmmod nvme_fabrics 00:06:14.249 rmmod nvme_keyring 00:06:14.249 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:14.249 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:14.249 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:14.249 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1999526 ']' 00:06:14.249 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1999526 00:06:14.249 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1999526 ']' 00:06:14.249 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1999526 00:06:14.249 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:14.249 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.249 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1999526 00:06:14.249 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.249 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.249 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1999526' 00:06:14.249 killing process with pid 1999526 00:06:14.249 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1999526 00:06:14.249 15:15:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
1999526 00:06:14.249 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:14.249 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:14.249 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:14.249 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:14.249 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:14.249 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:14.249 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:14.249 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:14.249 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:14.249 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:14.249 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:14.249 15:15:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:16.788 00:06:16.788 real 0m16.201s 00:06:16.788 user 0m29.197s 00:06:16.788 sys 0m5.536s 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.788 ************************************ 00:06:16.788 END TEST 
nvmf_delete_subsystem 00:06:16.788 ************************************ 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:16.788 ************************************ 00:06:16.788 START TEST nvmf_host_management 00:06:16.788 ************************************ 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:16.788 * Looking for test storage... 00:06:16.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.788 15:15:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.788 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.789 --rc genhtml_branch_coverage=1 00:06:16.789 --rc genhtml_function_coverage=1 00:06:16.789 --rc genhtml_legend=1 00:06:16.789 --rc 
geninfo_all_blocks=1 00:06:16.789 --rc geninfo_unexecuted_blocks=1 00:06:16.789 00:06:16.789 ' 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.789 --rc genhtml_branch_coverage=1 00:06:16.789 --rc genhtml_function_coverage=1 00:06:16.789 --rc genhtml_legend=1 00:06:16.789 --rc geninfo_all_blocks=1 00:06:16.789 --rc geninfo_unexecuted_blocks=1 00:06:16.789 00:06:16.789 ' 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.789 --rc genhtml_branch_coverage=1 00:06:16.789 --rc genhtml_function_coverage=1 00:06:16.789 --rc genhtml_legend=1 00:06:16.789 --rc geninfo_all_blocks=1 00:06:16.789 --rc geninfo_unexecuted_blocks=1 00:06:16.789 00:06:16.789 ' 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.789 --rc genhtml_branch_coverage=1 00:06:16.789 --rc genhtml_function_coverage=1 00:06:16.789 --rc genhtml_legend=1 00:06:16.789 --rc geninfo_all_blocks=1 00:06:16.789 --rc geninfo_unexecuted_blocks=1 00:06:16.789 00:06:16.789 ' 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:16.789 
15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
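The `cmp_versions` trace earlier in this test (`scripts/common.sh@333`-`@368`) splits each version string on `.`, `-` and `:` and compares numerically, component by component. A standalone reconstruction of that logic is sketched below; the `lt` name mirrors the helper seen in the trace, but this is an illustrative rewrite, not the exact script.

```shell
# Illustrative reconstruction of the version comparison traced above:
# split each version on ".", "-" and ":" and compare the components
# numerically; missing components count as 0.
lt() {  # returns 0 (true) when $1 < $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"   # the lcov 1.15-vs-2 check from the trace
```

This is why the trace shows `lcov --version` being parsed and then `lt 1.15 2` deciding which set of `--rc` coverage options to export.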
00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:16.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:16.789 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:16.790 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:16.790 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:16.790 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:16.790 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:16.790 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:16.790 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:16.790 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:16.790 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:16.790 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.790 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:16.790 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.790 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:16.790 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:16.790 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:16.790 15:15:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:23.362 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:23.362 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:23.362 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:23.362 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:23.362 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:23.362 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:23.362 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:23.362 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:23.362 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:23.362 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:23.362 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:23.362 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:23.362 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
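The `gather_supported_nvmf_pci_devs` trace that follows builds per-family device-ID lists (Intel E810/X722 and Mellanox) and then buckets each discovered PCI function by its device ID, which is how the log later reports `Found 0000:86:00.0 (0x8086 - 0x159b)` as an E810 part. The sketch below shows that classification step with the IDs taken from the log; the `classify` helper itself is an assumption for illustration, not a function from `nvmf/common.sh`.

```shell
# Device-ID tables as they appear in the trace below (nvmf/common.sh@320-344).
intel=0x8086 mellanox=0x15b3
e810=(0x1592 0x159b)
x722=(0x37d2)
mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x101b 0x1017 0x1019 0x1015 0x1013)

# Hypothetical helper: bucket a PCI device ID into its NIC family.
classify() {
    local id=$1 d
    for d in "${e810[@]}"; do [[ $id == "$d" ]] && { echo e810; return; }; done
    for d in "${x722[@]}"; do [[ $id == "$d" ]] && { echo x722; return; }; done
    for d in "${mlx[@]}";  do [[ $id == "$d" ]] && { echo mlx;  return; }; done
    echo unknown
}

classify 0x159b   # the E810 ID reported for 0000:86:00.0 in this run
```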
00:06:23.362 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:23.362 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:23.362 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:23.363 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:23.363 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:23.363 15:15:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:23.363 Found net devices under 0000:86:00.0: cvl_0_0 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:23.363 Found net devices under 0000:86:00.1: cvl_0_1 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:23.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:23.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:06:23.363 00:06:23.363 --- 10.0.0.2 ping statistics --- 00:06:23.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.363 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:23.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:23.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:06:23.363 00:06:23.363 --- 10.0.0.1 ping statistics --- 00:06:23.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.363 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:23.363 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:23.364 15:15:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2004359 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2004359 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2004359 ']' 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:23.364 [2024-11-20 15:15:26.589184] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:06:23.364 [2024-11-20 15:15:26.589229] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.364 [2024-11-20 15:15:26.670258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.364 [2024-11-20 15:15:26.713464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:23.364 [2024-11-20 15:15:26.713503] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:23.364 [2024-11-20 15:15:26.713510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.364 [2024-11-20 15:15:26.713517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.364 [2024-11-20 15:15:26.713522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:23.364 [2024-11-20 15:15:26.715208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.364 [2024-11-20 15:15:26.715338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.364 [2024-11-20 15:15:26.715464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.364 [2024-11-20 15:15:26.715465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:23.364 [2024-11-20 15:15:26.856812] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:23.364 15:15:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:23.364 Malloc0 00:06:23.364 [2024-11-20 15:15:26.925332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2004530 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2004530 /var/tmp/bdevperf.sock 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2004530 ']' 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:23.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:23.364 { 00:06:23.364 "params": { 00:06:23.364 "name": "Nvme$subsystem", 00:06:23.364 "trtype": "$TEST_TRANSPORT", 00:06:23.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:23.364 "adrfam": "ipv4", 00:06:23.364 "trsvcid": "$NVMF_PORT", 00:06:23.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:23.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:23.364 "hdgst": ${hdgst:-false}, 
00:06:23.364 "ddgst": ${ddgst:-false} 00:06:23.364 }, 00:06:23.364 "method": "bdev_nvme_attach_controller" 00:06:23.364 } 00:06:23.364 EOF 00:06:23.364 )") 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:23.364 15:15:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:23.364 "params": { 00:06:23.364 "name": "Nvme0", 00:06:23.364 "trtype": "tcp", 00:06:23.364 "traddr": "10.0.0.2", 00:06:23.364 "adrfam": "ipv4", 00:06:23.364 "trsvcid": "4420", 00:06:23.364 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:23.364 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:23.364 "hdgst": false, 00:06:23.364 "ddgst": false 00:06:23.364 }, 00:06:23.364 "method": "bdev_nvme_attach_controller" 00:06:23.364 }' 00:06:23.364 [2024-11-20 15:15:27.022838] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:06:23.364 [2024-11-20 15:15:27.022885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004530 ] 00:06:23.364 [2024-11-20 15:15:27.097604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.364 [2024-11-20 15:15:27.139106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.623 Running I/O for 10 seconds... 
00:06:23.623 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.623 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:23.623 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:23.623 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.623 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:23.623 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.623 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:23.623 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:23.623 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:23.623 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:23.623 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:23.623 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:23.623 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:23.623 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:23.624 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:23.624 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:23.624 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.624 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:23.883 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.883 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=89 00:06:23.883 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 89 -ge 100 ']' 00:06:23.883 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:24.143 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:24.143 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:24.143 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:24.143 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:24.143 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.143 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.143 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.143 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:06:24.143 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:06:24.143 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:24.143 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:24.143 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:24.143 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:24.143 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.143 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.143 [2024-11-20 15:15:27.856418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1648200 is same with the state(6) to be set 00:06:24.143 [2024-11-20 15:15:27.856469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1648200 is same with the state(6) to be set 00:06:24.143 [2024-11-20 15:15:27.856477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1648200 is same with the state(6) to be set 00:06:24.143 [2024-11-20 15:15:27.856483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1648200 is same with the state(6) to be set 00:06:24.143 [2024-11-20 15:15:27.856490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1648200 is same with the state(6) to be set 00:06:24.143 [2024-11-20 15:15:27.856496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1648200 is same with the state(6) to be set 00:06:24.143 [2024-11-20 15:15:27.856503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1648200 is same with the state(6) to be set 00:06:24.143 [2024-11-20 
15:15:27.856509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1648200 is same with the state(6) to be set 00:06:24.143 [2024-11-20 15:15:27.856514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1648200 is same with the state(6) to be set 00:06:24.143 [2024-11-20 15:15:27.856520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1648200 is same with the state(6) to be set 00:06:24.143 [2024-11-20 15:15:27.856526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1648200 is same with the state(6) to be set 00:06:24.143 [2024-11-20 15:15:27.856532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1648200 is same with the state(6) to be set 00:06:24.143 [2024-11-20 15:15:27.857676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.143 [2024-11-20 15:15:27.857710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.143 [2024-11-20 15:15:27.857720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.143 [2024-11-20 15:15:27.857728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.143 [2024-11-20 15:15:27.857735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.143 [2024-11-20 15:15:27.857743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.143 [2024-11-20 15:15:27.857756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:06:24.143 [2024-11-20 15:15:27.857762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.857769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1223500 is same with the state(6) to be set 00:06:24.144 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.144 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:24.144 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.144 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.144 [2024-11-20 15:15:27.869774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.869800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.869814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.869822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.869831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.869838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 
15:15:27.869846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.869852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.869861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.869867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.869875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.869882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.869890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.869897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.869905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.869911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.869919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.869926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.869934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.869945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.869959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.869966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.869974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.869980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.869988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.869994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.870003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.870009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.870018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.870024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.870032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.870039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.870047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.870054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.870062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.870068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.870076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.870082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.870091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.870097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.870105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.870112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.870121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.870127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.870137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.870143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.870152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.870159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.870167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.870174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.870182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 
[2024-11-20 15:15:27.870189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.870197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.870203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.870211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.870218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.870226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.870233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.870240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.870247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.144 [2024-11-20 15:15:27.870255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.144 [2024-11-20 15:15:27.870261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.145 [2024-11-20 15:15:27.870271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.145 [2024-11-20 15:15:27.870277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated NOTICE pairs elided: WRITE commands cid:32 through cid:63 (lba 102400 through 106368, len:128 each) all completed as ABORTED - SQ DELETION (00/08) qid:1 ...]
00:06:24.145 [2024-11-20 15:15:27.870781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:06:24.145 [2024-11-20 15:15:27.870859] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1223500 (9): Bad file descriptor 00:06:24.145 [2024-11-20 15:15:27.871745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:24.146 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.146 task offset: 98304 on job bdev=Nvme0n1 fails 00:06:24.146 00:06:24.146 Latency(us) 00:06:24.146 [2024-11-20T14:15:28.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:24.146 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:24.146 Job: Nvme0n1 ended in about 0.41 seconds with error 00:06:24.146 Verification LBA range: start 0x0 length 0x400 00:06:24.146 Nvme0n1 : 0.41 1881.26 117.58 156.77 0.00 30557.58 1417.57 27810.06 00:06:24.146 [2024-11-20T14:15:28.054Z] =================================================================================================================== 00:06:24.146 [2024-11-20T14:15:28.054Z] Total : 1881.26 117.58 156.77 0.00 30557.58 1417.57 27810.06 00:06:24.146 15:15:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:24.146 [2024-11-20 15:15:27.874131] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:24.146 [2024-11-20 15:15:27.880424] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:06:25.083 15:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2004530 00:06:25.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2004530) - No such process 00:06:25.083 15:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:25.083 15:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:25.083 15:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:25.083 15:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:25.083 15:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:25.083 15:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:25.083 15:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:25.083 15:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:25.083 { 00:06:25.083 "params": { 00:06:25.083 "name": "Nvme$subsystem", 00:06:25.083 "trtype": "$TEST_TRANSPORT", 00:06:25.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:25.083 "adrfam": "ipv4", 00:06:25.083 "trsvcid": "$NVMF_PORT", 00:06:25.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:25.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:25.083 "hdgst": ${hdgst:-false}, 00:06:25.083 "ddgst": ${ddgst:-false} 00:06:25.083 }, 00:06:25.083 "method": "bdev_nvme_attach_controller" 00:06:25.083 } 00:06:25.083 EOF 00:06:25.083 )") 00:06:25.083 
15:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:25.083 15:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:25.083 15:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:25.083 15:15:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:25.083 "params": { 00:06:25.083 "name": "Nvme0", 00:06:25.083 "trtype": "tcp", 00:06:25.083 "traddr": "10.0.0.2", 00:06:25.083 "adrfam": "ipv4", 00:06:25.083 "trsvcid": "4420", 00:06:25.083 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:25.083 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:25.083 "hdgst": false, 00:06:25.083 "ddgst": false 00:06:25.083 }, 00:06:25.083 "method": "bdev_nvme_attach_controller" 00:06:25.083 }' 00:06:25.083 [2024-11-20 15:15:28.926629] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:06:25.083 [2024-11-20 15:15:28.926678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004867 ] 00:06:25.342 [2024-11-20 15:15:29.003447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.342 [2024-11-20 15:15:29.043215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.342 Running I/O for 1 seconds... 
00:06:26.719 1953.00 IOPS, 122.06 MiB/s 00:06:26.719 Latency(us) 00:06:26.719 [2024-11-20T14:15:30.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:26.719 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:26.719 Verification LBA range: start 0x0 length 0x400 00:06:26.719 Nvme0n1 : 1.01 1991.03 124.44 0.00 0.00 31523.12 2322.25 27582.11 00:06:26.719 [2024-11-20T14:15:30.627Z] =================================================================================================================== 00:06:26.719 [2024-11-20T14:15:30.627Z] Total : 1991.03 124.44 0.00 0.00 31523.12 2322.25 27582.11 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:26.720 15:15:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:26.720 rmmod nvme_tcp 00:06:26.720 rmmod nvme_fabrics 00:06:26.720 rmmod nvme_keyring 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2004359 ']' 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2004359 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2004359 ']' 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2004359 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2004359 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2004359' 00:06:26.720 killing process with pid 2004359 00:06:26.720 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2004359 00:06:26.720 15:15:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2004359 00:06:26.979 [2024-11-20 15:15:30.695594] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:26.979 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:26.979 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:26.979 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:26.979 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:26.979 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:26.979 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:26.979 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:26.979 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:26.979 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:26.979 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.979 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:26.979 15:15:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:28.888 15:15:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:29.147 15:15:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:29.147 00:06:29.147 real 0m12.502s 00:06:29.147 user 0m19.930s 
00:06:29.147 sys 0m5.670s 00:06:29.147 15:15:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.147 15:15:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:29.147 ************************************ 00:06:29.147 END TEST nvmf_host_management 00:06:29.147 ************************************ 00:06:29.147 15:15:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:29.147 15:15:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:29.147 15:15:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.147 15:15:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:29.147 ************************************ 00:06:29.147 START TEST nvmf_lvol 00:06:29.147 ************************************ 00:06:29.147 15:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:29.147 * Looking for test storage... 
00:06:29.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:29.147 15:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:29.147 15:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:29.147 15:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.147 15:15:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.147 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:29.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.148 --rc genhtml_branch_coverage=1 00:06:29.148 --rc genhtml_function_coverage=1 00:06:29.148 --rc genhtml_legend=1 00:06:29.148 --rc geninfo_all_blocks=1 00:06:29.148 --rc geninfo_unexecuted_blocks=1 
00:06:29.148 00:06:29.148 ' 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:29.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.148 --rc genhtml_branch_coverage=1 00:06:29.148 --rc genhtml_function_coverage=1 00:06:29.148 --rc genhtml_legend=1 00:06:29.148 --rc geninfo_all_blocks=1 00:06:29.148 --rc geninfo_unexecuted_blocks=1 00:06:29.148 00:06:29.148 ' 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:29.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.148 --rc genhtml_branch_coverage=1 00:06:29.148 --rc genhtml_function_coverage=1 00:06:29.148 --rc genhtml_legend=1 00:06:29.148 --rc geninfo_all_blocks=1 00:06:29.148 --rc geninfo_unexecuted_blocks=1 00:06:29.148 00:06:29.148 ' 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:29.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.148 --rc genhtml_branch_coverage=1 00:06:29.148 --rc genhtml_function_coverage=1 00:06:29.148 --rc genhtml_legend=1 00:06:29.148 --rc geninfo_all_blocks=1 00:06:29.148 --rc geninfo_unexecuted_blocks=1 00:06:29.148 00:06:29.148 ' 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:29.148 15:15:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:29.148 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:29.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:29.408 15:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:35.983 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:35.983 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:35.984 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:35.984 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:35.984 
15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:35.984 Found net devices under 0000:86:00.0: cvl_0_0 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.984 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:35.985 15:15:38 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:35.985 Found net devices under 0000:86:00.1: cvl_0_1 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:35.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:35.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:06:35.985 00:06:35.985 --- 10.0.0.2 ping statistics --- 00:06:35.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.985 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:35.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:35.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:06:35.985 00:06:35.985 --- 10.0.0.1 ping statistics --- 00:06:35.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.985 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:35.985 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:35.985 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:35.985 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:35.985 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.985 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:35.985 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2008645 00:06:35.985 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:35.985 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2008645 00:06:35.985 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2008645 ']' 00:06:35.985 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.985 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.986 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.986 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.986 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:35.986 [2024-11-20 15:15:39.091066] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:06:35.986 [2024-11-20 15:15:39.091114] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:35.986 [2024-11-20 15:15:39.172369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.986 [2024-11-20 15:15:39.214779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:35.986 [2024-11-20 15:15:39.214814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:35.986 [2024-11-20 15:15:39.214822] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:35.986 [2024-11-20 15:15:39.214828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:35.986 [2024-11-20 15:15:39.214833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:35.986 [2024-11-20 15:15:39.216245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.986 [2024-11-20 15:15:39.216352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.986 [2024-11-20 15:15:39.216353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.986 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.986 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:35.986 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:35.986 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:35.986 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:35.986 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:35.986 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:35.986 [2024-11-20 15:15:39.525850] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.986 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:35.986 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:35.986 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:36.246 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:36.246 15:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:36.505 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:36.765 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c2e6ce0e-1639-4f11-a102-9396899c6460 00:06:36.765 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c2e6ce0e-1639-4f11-a102-9396899c6460 lvol 20 00:06:36.765 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=85e42f0c-233f-4967-8392-792e436b5fac 00:06:36.765 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:37.024 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 85e42f0c-233f-4967-8392-792e436b5fac 00:06:37.284 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:37.543 [2024-11-20 15:15:41.195941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:37.543 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:37.543 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2009135 00:06:37.543 15:15:41 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:37.543 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:38.920 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 85e42f0c-233f-4967-8392-792e436b5fac MY_SNAPSHOT 00:06:38.920 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=733ed751-c885-4f35-ac74-5a87245aa307 00:06:38.920 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 85e42f0c-233f-4967-8392-792e436b5fac 30 00:06:39.178 15:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 733ed751-c885-4f35-ac74-5a87245aa307 MY_CLONE 00:06:39.436 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=622a86c9-32d7-4c58-a5fc-bf5256c7d225 00:06:39.436 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 622a86c9-32d7-4c58-a5fc-bf5256c7d225 00:06:40.005 15:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2009135 00:06:48.125 Initializing NVMe Controllers 00:06:48.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:48.125 Controller IO queue size 128, less than required. 00:06:48.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:48.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:48.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:48.125 Initialization complete. Launching workers. 00:06:48.125 ======================================================== 00:06:48.125 Latency(us) 00:06:48.125 Device Information : IOPS MiB/s Average min max 00:06:48.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12154.20 47.48 10537.13 1599.14 107055.70 00:06:48.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12073.10 47.16 10605.17 3542.83 47633.16 00:06:48.125 ======================================================== 00:06:48.125 Total : 24227.30 94.64 10571.04 1599.14 107055.70 00:06:48.125 00:06:48.125 15:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:48.422 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 85e42f0c-233f-4967-8392-792e436b5fac 00:06:48.422 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c2e6ce0e-1639-4f11-a102-9396899c6460 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:48.747 15:15:52 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:48.747 rmmod nvme_tcp 00:06:48.747 rmmod nvme_fabrics 00:06:48.747 rmmod nvme_keyring 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2008645 ']' 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2008645 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2008645 ']' 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2008645 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2008645 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.747 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2008645' 00:06:48.747 killing process with pid 2008645 00:06:48.748 
15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2008645 00:06:48.748 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2008645 00:06:49.036 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:49.036 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:49.036 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:49.036 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:49.036 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:49.036 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:49.036 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:49.036 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:49.036 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:49.036 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.036 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:49.036 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.573 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:51.573 00:06:51.573 real 0m22.029s 00:06:51.573 user 1m3.545s 00:06:51.573 sys 0m7.550s 00:06:51.573 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.573 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:51.573 ************************************ 00:06:51.573 
END TEST nvmf_lvol 00:06:51.573 ************************************ 00:06:51.573 15:15:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:51.573 15:15:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:51.573 15:15:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.573 15:15:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:51.573 ************************************ 00:06:51.573 START TEST nvmf_lvs_grow 00:06:51.573 ************************************ 00:06:51.573 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:51.573 * Looking for test storage... 00:06:51.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.573 15:15:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.573 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:51.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.574 --rc genhtml_branch_coverage=1 00:06:51.574 --rc genhtml_function_coverage=1 00:06:51.574 --rc genhtml_legend=1 00:06:51.574 --rc geninfo_all_blocks=1 00:06:51.574 --rc geninfo_unexecuted_blocks=1 00:06:51.574 00:06:51.574 ' 
00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:51.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.574 --rc genhtml_branch_coverage=1 00:06:51.574 --rc genhtml_function_coverage=1 00:06:51.574 --rc genhtml_legend=1 00:06:51.574 --rc geninfo_all_blocks=1 00:06:51.574 --rc geninfo_unexecuted_blocks=1 00:06:51.574 00:06:51.574 ' 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:51.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.574 --rc genhtml_branch_coverage=1 00:06:51.574 --rc genhtml_function_coverage=1 00:06:51.574 --rc genhtml_legend=1 00:06:51.574 --rc geninfo_all_blocks=1 00:06:51.574 --rc geninfo_unexecuted_blocks=1 00:06:51.574 00:06:51.574 ' 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:51.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.574 --rc genhtml_branch_coverage=1 00:06:51.574 --rc genhtml_function_coverage=1 00:06:51.574 --rc genhtml_legend=1 00:06:51.574 --rc geninfo_all_blocks=1 00:06:51.574 --rc geninfo_unexecuted_blocks=1 00:06:51.574 00:06:51.574 ' 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.574 15:15:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.574 
15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.574 15:15:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:51.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.574 
15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:51.574 15:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:58.146 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:58.146 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:58.146 
15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:58.146 Found net devices under 0000:86:00.0: cvl_0_0 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:58.146 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:58.147 Found net devices under 0000:86:00.1: cvl_0_1 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:58.147 15:16:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:58.147 15:16:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:58.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:58.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:06:58.147 00:06:58.147 --- 10.0.0.2 ping statistics --- 00:06:58.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.147 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:58.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:58.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:06:58.147 00:06:58.147 --- 10.0.0.1 ping statistics --- 00:06:58.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.147 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2014531 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2014531 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2014531 ']' 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:58.147 [2024-11-20 15:16:01.288755] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:06:58.147 [2024-11-20 15:16:01.288804] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.147 [2024-11-20 15:16:01.369881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.147 [2024-11-20 15:16:01.411089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:58.147 [2024-11-20 15:16:01.411126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:58.147 [2024-11-20 15:16:01.411134] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:58.147 [2024-11-20 15:16:01.411140] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:58.147 [2024-11-20 15:16:01.411145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:58.147 [2024-11-20 15:16:01.411697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:58.147 [2024-11-20 15:16:01.711494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:58.147 ************************************ 00:06:58.147 START TEST lvs_grow_clean 00:06:58.147 ************************************ 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:06:58.147 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:58.148 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:58.148 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:58.148 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:58.148 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:58.148 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:58.148 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:58.148 15:16:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:58.148 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:58.148 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:58.407 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=36eaa7e9-ff95-4d3b-bdbb-1178c3cf3bf6 00:06:58.407 15:16:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36eaa7e9-ff95-4d3b-bdbb-1178c3cf3bf6 00:06:58.407 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:58.666 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:58.666 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:58.666 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 36eaa7e9-ff95-4d3b-bdbb-1178c3cf3bf6 lvol 150 00:06:58.924 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=fc03bd0d-c2eb-49a1-9d34-590d2e61a757 00:06:58.924 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:58.924 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:58.924 [2024-11-20 15:16:02.750830] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:58.924 [2024-11-20 15:16:02.750878] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:58.924 true 00:06:58.924 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36eaa7e9-ff95-4d3b-bdbb-1178c3cf3bf6 00:06:58.924 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:59.183 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:59.183 15:16:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:59.442 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fc03bd0d-c2eb-49a1-9d34-590d2e61a757 00:06:59.442 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:59.700 [2024-11-20 15:16:03.493056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:59.700 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:59.959 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2015026 00:06:59.959 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:59.959 15:16:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:59.959 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2015026 /var/tmp/bdevperf.sock 00:06:59.959 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2015026 ']' 00:06:59.959 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:59.959 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.959 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:59.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:59.959 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.959 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:59.959 [2024-11-20 15:16:03.731203] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
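(Editorial note, not part of the log.) The `bdev_get_bdevs` dump for Nvme0n1 that follows reports `"num_blocks": 38912` for the 150 MiB lvol. A short sketch of how that number falls out: lvol sizes are rounded up to whole 4 MiB clusters, and each cluster holds 1024 blocks at the 4096-byte block size:

```python
import math

# How a 150 MiB lvol on a 4 MiB-cluster lvstore maps to 38912 blocks
# of 4096 bytes, as reported by bdev_get_bdevs.
lvol_mb, cluster_mb, block_sz = 150, 4, 4096
clusters = math.ceil(lvol_mb / cluster_mb)                 # 38 (150/4 = 37.5, rounded up)
blocks_per_cluster = cluster_mb * 1024 * 1024 // block_sz  # 1024
print(clusters * blocks_per_cluster)                       # 38912
```

The same 38 clusters reappear later in the log as `"num_allocated_clusters": 38` on the underlying lvol.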
00:06:59.959 [2024-11-20 15:16:03.731249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2015026 ] 00:06:59.959 [2024-11-20 15:16:03.806476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.959 [2024-11-20 15:16:03.847380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.218 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.218 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:00.218 15:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:00.476 Nvme0n1 00:07:00.476 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:00.735 [ 00:07:00.735 { 00:07:00.735 "name": "Nvme0n1", 00:07:00.735 "aliases": [ 00:07:00.735 "fc03bd0d-c2eb-49a1-9d34-590d2e61a757" 00:07:00.735 ], 00:07:00.735 "product_name": "NVMe disk", 00:07:00.735 "block_size": 4096, 00:07:00.735 "num_blocks": 38912, 00:07:00.735 "uuid": "fc03bd0d-c2eb-49a1-9d34-590d2e61a757", 00:07:00.735 "numa_id": 1, 00:07:00.735 "assigned_rate_limits": { 00:07:00.735 "rw_ios_per_sec": 0, 00:07:00.735 "rw_mbytes_per_sec": 0, 00:07:00.735 "r_mbytes_per_sec": 0, 00:07:00.735 "w_mbytes_per_sec": 0 00:07:00.735 }, 00:07:00.735 "claimed": false, 00:07:00.735 "zoned": false, 00:07:00.735 "supported_io_types": { 00:07:00.735 "read": true, 
00:07:00.735 "write": true, 00:07:00.735 "unmap": true, 00:07:00.735 "flush": true, 00:07:00.736 "reset": true, 00:07:00.736 "nvme_admin": true, 00:07:00.736 "nvme_io": true, 00:07:00.736 "nvme_io_md": false, 00:07:00.736 "write_zeroes": true, 00:07:00.736 "zcopy": false, 00:07:00.736 "get_zone_info": false, 00:07:00.736 "zone_management": false, 00:07:00.736 "zone_append": false, 00:07:00.736 "compare": true, 00:07:00.736 "compare_and_write": true, 00:07:00.736 "abort": true, 00:07:00.736 "seek_hole": false, 00:07:00.736 "seek_data": false, 00:07:00.736 "copy": true, 00:07:00.736 "nvme_iov_md": false 00:07:00.736 }, 00:07:00.736 "memory_domains": [ 00:07:00.736 { 00:07:00.736 "dma_device_id": "system", 00:07:00.736 "dma_device_type": 1 00:07:00.736 } 00:07:00.736 ], 00:07:00.736 "driver_specific": { 00:07:00.736 "nvme": [ 00:07:00.736 { 00:07:00.736 "trid": { 00:07:00.736 "trtype": "TCP", 00:07:00.736 "adrfam": "IPv4", 00:07:00.736 "traddr": "10.0.0.2", 00:07:00.736 "trsvcid": "4420", 00:07:00.736 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:00.736 }, 00:07:00.736 "ctrlr_data": { 00:07:00.736 "cntlid": 1, 00:07:00.736 "vendor_id": "0x8086", 00:07:00.736 "model_number": "SPDK bdev Controller", 00:07:00.736 "serial_number": "SPDK0", 00:07:00.736 "firmware_revision": "25.01", 00:07:00.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:00.736 "oacs": { 00:07:00.736 "security": 0, 00:07:00.736 "format": 0, 00:07:00.736 "firmware": 0, 00:07:00.736 "ns_manage": 0 00:07:00.736 }, 00:07:00.736 "multi_ctrlr": true, 00:07:00.736 "ana_reporting": false 00:07:00.736 }, 00:07:00.736 "vs": { 00:07:00.736 "nvme_version": "1.3" 00:07:00.736 }, 00:07:00.736 "ns_data": { 00:07:00.736 "id": 1, 00:07:00.736 "can_share": true 00:07:00.736 } 00:07:00.736 } 00:07:00.736 ], 00:07:00.736 "mp_policy": "active_passive" 00:07:00.736 } 00:07:00.736 } 00:07:00.736 ] 00:07:00.736 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2015177 00:07:00.736 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:00.736 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:00.995 Running I/O for 10 seconds... 00:07:01.933 Latency(us) 00:07:01.933 [2024-11-20T14:16:05.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:01.933 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.933 Nvme0n1 : 1.00 22737.00 88.82 0.00 0.00 0.00 0.00 0.00 00:07:01.933 [2024-11-20T14:16:05.841Z] =================================================================================================================== 00:07:01.933 [2024-11-20T14:16:05.841Z] Total : 22737.00 88.82 0.00 0.00 0.00 0.00 0.00 00:07:01.933 00:07:02.868 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 36eaa7e9-ff95-4d3b-bdbb-1178c3cf3bf6 00:07:02.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.868 Nvme0n1 : 2.00 22773.00 88.96 0.00 0.00 0.00 0.00 0.00 00:07:02.868 [2024-11-20T14:16:06.776Z] =================================================================================================================== 00:07:02.868 [2024-11-20T14:16:06.776Z] Total : 22773.00 88.96 0.00 0.00 0.00 0.00 0.00 00:07:02.868 00:07:02.868 true 00:07:03.126 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36eaa7e9-ff95-4d3b-bdbb-1178c3cf3bf6 00:07:03.126 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:03.126 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:03.126 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:03.126 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2015177 00:07:04.060 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.060 Nvme0n1 : 3.00 22782.33 88.99 0.00 0.00 0.00 0.00 0.00 00:07:04.060 [2024-11-20T14:16:07.968Z] =================================================================================================================== 00:07:04.060 [2024-11-20T14:16:07.968Z] Total : 22782.33 88.99 0.00 0.00 0.00 0.00 0.00 00:07:04.060 00:07:04.996 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.996 Nvme0n1 : 4.00 22869.00 89.33 0.00 0.00 0.00 0.00 0.00 00:07:04.996 [2024-11-20T14:16:08.904Z] =================================================================================================================== 00:07:04.996 [2024-11-20T14:16:08.904Z] Total : 22869.00 89.33 0.00 0.00 0.00 0.00 0.00 00:07:04.996 00:07:05.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.932 Nvme0n1 : 5.00 22914.20 89.51 0.00 0.00 0.00 0.00 0.00 00:07:05.932 [2024-11-20T14:16:09.840Z] =================================================================================================================== 00:07:05.932 [2024-11-20T14:16:09.840Z] Total : 22914.20 89.51 0.00 0.00 0.00 0.00 0.00 00:07:05.932 00:07:06.869 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.869 Nvme0n1 : 6.00 22896.00 89.44 0.00 0.00 0.00 0.00 0.00 00:07:06.869 [2024-11-20T14:16:10.777Z] =================================================================================================================== 00:07:06.869 
[2024-11-20T14:16:10.777Z] Total : 22896.00 89.44 0.00 0.00 0.00 0.00 0.00 00:07:06.869 00:07:07.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.805 Nvme0n1 : 7.00 22918.86 89.53 0.00 0.00 0.00 0.00 0.00 00:07:07.805 [2024-11-20T14:16:11.713Z] =================================================================================================================== 00:07:07.805 [2024-11-20T14:16:11.713Z] Total : 22918.86 89.53 0.00 0.00 0.00 0.00 0.00 00:07:07.805 00:07:09.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.182 Nvme0n1 : 8.00 22955.00 89.67 0.00 0.00 0.00 0.00 0.00 00:07:09.182 [2024-11-20T14:16:13.090Z] =================================================================================================================== 00:07:09.182 [2024-11-20T14:16:13.090Z] Total : 22955.00 89.67 0.00 0.00 0.00 0.00 0.00 00:07:09.182 00:07:10.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.118 Nvme0n1 : 9.00 22979.78 89.76 0.00 0.00 0.00 0.00 0.00 00:07:10.118 [2024-11-20T14:16:14.026Z] =================================================================================================================== 00:07:10.118 [2024-11-20T14:16:14.026Z] Total : 22979.78 89.76 0.00 0.00 0.00 0.00 0.00 00:07:10.118 00:07:11.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.079 Nvme0n1 : 10.00 23000.50 89.85 0.00 0.00 0.00 0.00 0.00 00:07:11.079 [2024-11-20T14:16:14.987Z] =================================================================================================================== 00:07:11.079 [2024-11-20T14:16:14.987Z] Total : 23000.50 89.85 0.00 0.00 0.00 0.00 0.00 00:07:11.079 00:07:11.079 00:07:11.079 Latency(us) 00:07:11.079 [2024-11-20T14:16:14.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:11.079 Nvme0n1 : 10.00 23000.32 89.85 0.00 0.00 5562.00 1438.94 10200.82 00:07:11.079 [2024-11-20T14:16:14.987Z] =================================================================================================================== 00:07:11.079 [2024-11-20T14:16:14.987Z] Total : 23000.32 89.85 0.00 0.00 5562.00 1438.94 10200.82 00:07:11.079 { 00:07:11.079 "results": [ 00:07:11.079 { 00:07:11.079 "job": "Nvme0n1", 00:07:11.079 "core_mask": "0x2", 00:07:11.079 "workload": "randwrite", 00:07:11.079 "status": "finished", 00:07:11.079 "queue_depth": 128, 00:07:11.079 "io_size": 4096, 00:07:11.079 "runtime": 10.00286, 00:07:11.079 "iops": 23000.321907934333, 00:07:11.079 "mibps": 89.84500745286849, 00:07:11.079 "io_failed": 0, 00:07:11.079 "io_timeout": 0, 00:07:11.079 "avg_latency_us": 5561.999883906283, 00:07:11.079 "min_latency_us": 1438.942608695652, 00:07:11.079 "max_latency_us": 10200.820869565217 00:07:11.079 } 00:07:11.079 ], 00:07:11.079 "core_count": 1 00:07:11.079 } 00:07:11.079 15:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2015026 00:07:11.079 15:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2015026 ']' 00:07:11.079 15:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2015026 00:07:11.079 15:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:11.079 15:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.079 15:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2015026 00:07:11.079 15:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:11.079 15:16:14 
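(Editorial note, not part of the log.) The summary's MiB/s column is derived directly from the IOPS figure and the 4096-byte I/O size passed to bdevperf (`-o 4096`); a quick check that the reported `"mibps"` is internally consistent:

```python
# Cross-check of the bdevperf summary: MiB/s = IOPS * io_size / 2^20.
iops = 23000.321907934333   # "iops" from the results JSON above
io_size = 4096              # bytes per I/O (-o 4096)
mibps = iops * io_size / (1024 * 1024)
print(round(mibps, 2))      # 89.85, matching "mibps": 89.845...
```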
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:11.079 15:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2015026' 00:07:11.079 killing process with pid 2015026 00:07:11.079 15:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2015026 00:07:11.079 Received shutdown signal, test time was about 10.000000 seconds 00:07:11.079 00:07:11.079 Latency(us) 00:07:11.079 [2024-11-20T14:16:14.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.079 [2024-11-20T14:16:14.987Z] =================================================================================================================== 00:07:11.079 [2024-11-20T14:16:14.987Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:11.079 15:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2015026 00:07:11.079 15:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:11.338 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:11.596 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36eaa7e9-ff95-4d3b-bdbb-1178c3cf3bf6 00:07:11.596 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:11.596 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:11.596 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:11.596 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:11.855 [2024-11-20 15:16:15.665642] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:11.855 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36eaa7e9-ff95-4d3b-bdbb-1178c3cf3bf6 00:07:11.855 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:11.855 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36eaa7e9-ff95-4d3b-bdbb-1178c3cf3bf6 00:07:11.855 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.855 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.855 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.855 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.855 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.855 
15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.855 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.855 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:11.855 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36eaa7e9-ff95-4d3b-bdbb-1178c3cf3bf6 00:07:12.113 request: 00:07:12.113 { 00:07:12.113 "uuid": "36eaa7e9-ff95-4d3b-bdbb-1178c3cf3bf6", 00:07:12.113 "method": "bdev_lvol_get_lvstores", 00:07:12.113 "req_id": 1 00:07:12.113 } 00:07:12.113 Got JSON-RPC error response 00:07:12.113 response: 00:07:12.113 { 00:07:12.113 "code": -19, 00:07:12.113 "message": "No such device" 00:07:12.113 } 00:07:12.113 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:12.113 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:12.113 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:12.113 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:12.113 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:12.372 aio_bdev 00:07:12.372 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev fc03bd0d-c2eb-49a1-9d34-590d2e61a757 00:07:12.372 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=fc03bd0d-c2eb-49a1-9d34-590d2e61a757 00:07:12.372 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:12.372 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:12.372 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:12.372 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:12.372 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:12.630 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fc03bd0d-c2eb-49a1-9d34-590d2e61a757 -t 2000 00:07:12.630 [ 00:07:12.630 { 00:07:12.630 "name": "fc03bd0d-c2eb-49a1-9d34-590d2e61a757", 00:07:12.630 "aliases": [ 00:07:12.630 "lvs/lvol" 00:07:12.630 ], 00:07:12.630 "product_name": "Logical Volume", 00:07:12.630 "block_size": 4096, 00:07:12.630 "num_blocks": 38912, 00:07:12.630 "uuid": "fc03bd0d-c2eb-49a1-9d34-590d2e61a757", 00:07:12.630 "assigned_rate_limits": { 00:07:12.630 "rw_ios_per_sec": 0, 00:07:12.630 "rw_mbytes_per_sec": 0, 00:07:12.630 "r_mbytes_per_sec": 0, 00:07:12.630 "w_mbytes_per_sec": 0 00:07:12.630 }, 00:07:12.630 "claimed": false, 00:07:12.630 "zoned": false, 00:07:12.630 "supported_io_types": { 00:07:12.630 "read": true, 00:07:12.630 "write": true, 00:07:12.630 "unmap": true, 00:07:12.630 "flush": false, 00:07:12.630 "reset": true, 00:07:12.630 
"nvme_admin": false, 00:07:12.630 "nvme_io": false, 00:07:12.630 "nvme_io_md": false, 00:07:12.630 "write_zeroes": true, 00:07:12.630 "zcopy": false, 00:07:12.630 "get_zone_info": false, 00:07:12.630 "zone_management": false, 00:07:12.630 "zone_append": false, 00:07:12.630 "compare": false, 00:07:12.630 "compare_and_write": false, 00:07:12.630 "abort": false, 00:07:12.630 "seek_hole": true, 00:07:12.630 "seek_data": true, 00:07:12.630 "copy": false, 00:07:12.630 "nvme_iov_md": false 00:07:12.630 }, 00:07:12.630 "driver_specific": { 00:07:12.630 "lvol": { 00:07:12.630 "lvol_store_uuid": "36eaa7e9-ff95-4d3b-bdbb-1178c3cf3bf6", 00:07:12.630 "base_bdev": "aio_bdev", 00:07:12.630 "thin_provision": false, 00:07:12.630 "num_allocated_clusters": 38, 00:07:12.630 "snapshot": false, 00:07:12.630 "clone": false, 00:07:12.630 "esnap_clone": false 00:07:12.630 } 00:07:12.630 } 00:07:12.630 } 00:07:12.630 ] 00:07:12.630 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:12.630 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36eaa7e9-ff95-4d3b-bdbb-1178c3cf3bf6 00:07:12.630 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:12.888 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:12.888 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36eaa7e9-ff95-4d3b-bdbb-1178c3cf3bf6 00:07:12.888 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:13.146 15:16:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:13.146 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fc03bd0d-c2eb-49a1-9d34-590d2e61a757 00:07:13.405 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 36eaa7e9-ff95-4d3b-bdbb-1178c3cf3bf6 00:07:13.405 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:13.663 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:13.663 00:07:13.663 real 0m15.735s 00:07:13.663 user 0m15.231s 00:07:13.663 sys 0m1.529s 00:07:13.663 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.663 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:13.663 ************************************ 00:07:13.663 END TEST lvs_grow_clean 00:07:13.663 ************************************ 00:07:13.663 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:13.663 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:13.663 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.663 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:13.922 ************************************ 
00:07:13.922 START TEST lvs_grow_dirty 00:07:13.922 ************************************ 00:07:13.922 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:13.922 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:13.922 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:13.922 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:13.922 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:13.922 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:13.922 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:13.922 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:13.922 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:13.922 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:13.922 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:13.922 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:14.181 15:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ed26ca00-ec4d-4978-bf3b-9236e4192573 00:07:14.181 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed26ca00-ec4d-4978-bf3b-9236e4192573 00:07:14.181 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:14.440 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:14.440 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:14.440 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ed26ca00-ec4d-4978-bf3b-9236e4192573 lvol 150 00:07:14.699 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=2aa9a747-6e3c-414c-ae21-e3a34dc4a73c 00:07:14.699 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:14.699 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:14.699 [2024-11-20 15:16:18.566917] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:14.699 [2024-11-20 15:16:18.566975] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:14.699 true 00:07:14.699 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed26ca00-ec4d-4978-bf3b-9236e4192573 00:07:14.699 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:14.958 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:14.958 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:15.217 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2aa9a747-6e3c-414c-ae21-e3a34dc4a73c 00:07:15.475 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:15.475 [2024-11-20 15:16:19.321158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.475 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:15.733 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2017637 00:07:15.733 15:16:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:15.733 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:15.733 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2017637 /var/tmp/bdevperf.sock 00:07:15.733 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2017637 ']' 00:07:15.733 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:15.733 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.733 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:15.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:15.733 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.733 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:15.733 [2024-11-20 15:16:19.591269] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:07:15.733 [2024-11-20 15:16:19.591316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2017637 ] 00:07:15.991 [2024-11-20 15:16:19.666024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.991 [2024-11-20 15:16:19.706655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.991 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.991 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:15.991 15:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:16.250 Nvme0n1 00:07:16.250 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:16.507 [ 00:07:16.507 { 00:07:16.507 "name": "Nvme0n1", 00:07:16.507 "aliases": [ 00:07:16.507 "2aa9a747-6e3c-414c-ae21-e3a34dc4a73c" 00:07:16.507 ], 00:07:16.507 "product_name": "NVMe disk", 00:07:16.507 "block_size": 4096, 00:07:16.507 "num_blocks": 38912, 00:07:16.507 "uuid": "2aa9a747-6e3c-414c-ae21-e3a34dc4a73c", 00:07:16.507 "numa_id": 1, 00:07:16.507 "assigned_rate_limits": { 00:07:16.507 "rw_ios_per_sec": 0, 00:07:16.507 "rw_mbytes_per_sec": 0, 00:07:16.508 "r_mbytes_per_sec": 0, 00:07:16.508 "w_mbytes_per_sec": 0 00:07:16.508 }, 00:07:16.508 "claimed": false, 00:07:16.508 "zoned": false, 00:07:16.508 "supported_io_types": { 00:07:16.508 "read": true, 
00:07:16.508 "write": true, 00:07:16.508 "unmap": true, 00:07:16.508 "flush": true, 00:07:16.508 "reset": true, 00:07:16.508 "nvme_admin": true, 00:07:16.508 "nvme_io": true, 00:07:16.508 "nvme_io_md": false, 00:07:16.508 "write_zeroes": true, 00:07:16.508 "zcopy": false, 00:07:16.508 "get_zone_info": false, 00:07:16.508 "zone_management": false, 00:07:16.508 "zone_append": false, 00:07:16.508 "compare": true, 00:07:16.508 "compare_and_write": true, 00:07:16.508 "abort": true, 00:07:16.508 "seek_hole": false, 00:07:16.508 "seek_data": false, 00:07:16.508 "copy": true, 00:07:16.508 "nvme_iov_md": false 00:07:16.508 }, 00:07:16.508 "memory_domains": [ 00:07:16.508 { 00:07:16.508 "dma_device_id": "system", 00:07:16.508 "dma_device_type": 1 00:07:16.508 } 00:07:16.508 ], 00:07:16.508 "driver_specific": { 00:07:16.508 "nvme": [ 00:07:16.508 { 00:07:16.508 "trid": { 00:07:16.508 "trtype": "TCP", 00:07:16.508 "adrfam": "IPv4", 00:07:16.508 "traddr": "10.0.0.2", 00:07:16.508 "trsvcid": "4420", 00:07:16.508 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:16.508 }, 00:07:16.508 "ctrlr_data": { 00:07:16.508 "cntlid": 1, 00:07:16.508 "vendor_id": "0x8086", 00:07:16.508 "model_number": "SPDK bdev Controller", 00:07:16.508 "serial_number": "SPDK0", 00:07:16.508 "firmware_revision": "25.01", 00:07:16.508 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:16.508 "oacs": { 00:07:16.508 "security": 0, 00:07:16.508 "format": 0, 00:07:16.508 "firmware": 0, 00:07:16.508 "ns_manage": 0 00:07:16.508 }, 00:07:16.508 "multi_ctrlr": true, 00:07:16.508 "ana_reporting": false 00:07:16.508 }, 00:07:16.508 "vs": { 00:07:16.508 "nvme_version": "1.3" 00:07:16.508 }, 00:07:16.508 "ns_data": { 00:07:16.508 "id": 1, 00:07:16.508 "can_share": true 00:07:16.508 } 00:07:16.508 } 00:07:16.508 ], 00:07:16.508 "mp_policy": "active_passive" 00:07:16.508 } 00:07:16.508 } 00:07:16.508 ] 00:07:16.508 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2017858 00:07:16.508 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:16.508 15:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:16.508 Running I/O for 10 seconds... 00:07:17.883 Latency(us) 00:07:17.883 [2024-11-20T14:16:21.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.883 Nvme0n1 : 1.00 22356.00 87.33 0.00 0.00 0.00 0.00 0.00 00:07:17.883 [2024-11-20T14:16:21.791Z] =================================================================================================================== 00:07:17.883 [2024-11-20T14:16:21.791Z] Total : 22356.00 87.33 0.00 0.00 0.00 0.00 0.00 00:07:17.883 00:07:18.450 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ed26ca00-ec4d-4978-bf3b-9236e4192573 00:07:18.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.710 Nvme0n1 : 2.00 22674.50 88.57 0.00 0.00 0.00 0.00 0.00 00:07:18.710 [2024-11-20T14:16:22.618Z] =================================================================================================================== 00:07:18.710 [2024-11-20T14:16:22.618Z] Total : 22674.50 88.57 0.00 0.00 0.00 0.00 0.00 00:07:18.710 00:07:18.710 true 00:07:18.710 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed26ca00-ec4d-4978-bf3b-9236e4192573 00:07:18.710 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:18.968 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:18.968 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:18.968 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2017858 00:07:19.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.536 Nvme0n1 : 3.00 22716.00 88.73 0.00 0.00 0.00 0.00 0.00 00:07:19.536 [2024-11-20T14:16:23.444Z] =================================================================================================================== 00:07:19.536 [2024-11-20T14:16:23.444Z] Total : 22716.00 88.73 0.00 0.00 0.00 0.00 0.00 00:07:19.536 00:07:20.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.912 Nvme0n1 : 4.00 22816.25 89.13 0.00 0.00 0.00 0.00 0.00 00:07:20.912 [2024-11-20T14:16:24.820Z] =================================================================================================================== 00:07:20.912 [2024-11-20T14:16:24.820Z] Total : 22816.25 89.13 0.00 0.00 0.00 0.00 0.00 00:07:20.912 00:07:21.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.848 Nvme0n1 : 5.00 22876.40 89.36 0.00 0.00 0.00 0.00 0.00 00:07:21.848 [2024-11-20T14:16:25.756Z] =================================================================================================================== 00:07:21.848 [2024-11-20T14:16:25.756Z] Total : 22876.40 89.36 0.00 0.00 0.00 0.00 0.00 00:07:21.848 00:07:22.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.785 Nvme0n1 : 6.00 22927.83 89.56 0.00 0.00 0.00 0.00 0.00 00:07:22.785 [2024-11-20T14:16:26.693Z] =================================================================================================================== 00:07:22.785 
[2024-11-20T14:16:26.693Z] Total : 22927.83 89.56 0.00 0.00 0.00 0.00 0.00 00:07:22.785 00:07:23.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.723 Nvme0n1 : 7.00 22958.43 89.68 0.00 0.00 0.00 0.00 0.00 00:07:23.723 [2024-11-20T14:16:27.631Z] =================================================================================================================== 00:07:23.723 [2024-11-20T14:16:27.631Z] Total : 22958.43 89.68 0.00 0.00 0.00 0.00 0.00 00:07:23.723 00:07:24.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.661 Nvme0n1 : 8.00 22982.00 89.77 0.00 0.00 0.00 0.00 0.00 00:07:24.661 [2024-11-20T14:16:28.569Z] =================================================================================================================== 00:07:24.661 [2024-11-20T14:16:28.569Z] Total : 22982.00 89.77 0.00 0.00 0.00 0.00 0.00 00:07:24.661 00:07:25.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.595 Nvme0n1 : 9.00 22997.67 89.83 0.00 0.00 0.00 0.00 0.00 00:07:25.595 [2024-11-20T14:16:29.503Z] =================================================================================================================== 00:07:25.595 [2024-11-20T14:16:29.503Z] Total : 22997.67 89.83 0.00 0.00 0.00 0.00 0.00 00:07:25.595 00:07:26.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.530 Nvme0n1 : 10.00 23003.70 89.86 0.00 0.00 0.00 0.00 0.00 00:07:26.530 [2024-11-20T14:16:30.438Z] =================================================================================================================== 00:07:26.530 [2024-11-20T14:16:30.438Z] Total : 23003.70 89.86 0.00 0.00 0.00 0.00 0.00 00:07:26.530 00:07:26.530 00:07:26.530 Latency(us) 00:07:26.530 [2024-11-20T14:16:30.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:26.530 Nvme0n1 : 10.00 22997.89 89.84 0.00 0.00 5562.04 3333.79 12594.31 00:07:26.530 [2024-11-20T14:16:30.438Z] =================================================================================================================== 00:07:26.530 [2024-11-20T14:16:30.438Z] Total : 22997.89 89.84 0.00 0.00 5562.04 3333.79 12594.31 00:07:26.530 { 00:07:26.530 "results": [ 00:07:26.530 { 00:07:26.530 "job": "Nvme0n1", 00:07:26.530 "core_mask": "0x2", 00:07:26.530 "workload": "randwrite", 00:07:26.530 "status": "finished", 00:07:26.530 "queue_depth": 128, 00:07:26.530 "io_size": 4096, 00:07:26.530 "runtime": 10.002569, 00:07:26.530 "iops": 22997.891841585897, 00:07:26.530 "mibps": 89.83551500619491, 00:07:26.530 "io_failed": 0, 00:07:26.530 "io_timeout": 0, 00:07:26.530 "avg_latency_us": 5562.042408630407, 00:07:26.530 "min_latency_us": 3333.7878260869566, 00:07:26.530 "max_latency_us": 12594.30956521739 00:07:26.530 } 00:07:26.530 ], 00:07:26.530 "core_count": 1 00:07:26.530 } 00:07:26.789 15:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2017637 00:07:26.789 15:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2017637 ']' 00:07:26.789 15:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2017637 00:07:26.789 15:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:26.789 15:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.789 15:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2017637 00:07:26.789 15:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:26.789 15:16:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:26.789 15:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2017637' 00:07:26.789 killing process with pid 2017637 00:07:26.789 15:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2017637 00:07:26.789 Received shutdown signal, test time was about 10.000000 seconds 00:07:26.789 00:07:26.789 Latency(us) 00:07:26.789 [2024-11-20T14:16:30.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.789 [2024-11-20T14:16:30.697Z] =================================================================================================================== 00:07:26.789 [2024-11-20T14:16:30.697Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:26.789 15:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2017637 00:07:26.789 15:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:27.048 15:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:27.307 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed26ca00-ec4d-4978-bf3b-9236e4192573 00:07:27.307 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:27.566 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:27.566 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:27.566 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2014531 00:07:27.566 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2014531 00:07:27.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2014531 Killed "${NVMF_APP[@]}" "$@" 00:07:27.566 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:27.566 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:27.566 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:27.566 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:27.566 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:27.566 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2019711 00:07:27.566 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2019711 00:07:27.566 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:27.566 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2019711 ']' 00:07:27.566 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.566 15:16:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.566 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.566 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.566 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:27.566 [2024-11-20 15:16:31.404645] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:07:27.566 [2024-11-20 15:16:31.404695] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.825 [2024-11-20 15:16:31.486225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.825 [2024-11-20 15:16:31.526754] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.825 [2024-11-20 15:16:31.526791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.825 [2024-11-20 15:16:31.526798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.825 [2024-11-20 15:16:31.526804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.825 [2024-11-20 15:16:31.526809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:27.825 [2024-11-20 15:16:31.527409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.825 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.825 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:27.825 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:27.825 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:27.825 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:27.825 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.825 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:28.084 [2024-11-20 15:16:31.823826] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:28.084 [2024-11-20 15:16:31.823921] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:28.084 [2024-11-20 15:16:31.823954] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:28.084 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:28.084 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 2aa9a747-6e3c-414c-ae21-e3a34dc4a73c 00:07:28.084 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=2aa9a747-6e3c-414c-ae21-e3a34dc4a73c 
00:07:28.084 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:28.084 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:28.084 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:28.084 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:28.084 15:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:28.343 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2aa9a747-6e3c-414c-ae21-e3a34dc4a73c -t 2000 00:07:28.343 [ 00:07:28.343 { 00:07:28.343 "name": "2aa9a747-6e3c-414c-ae21-e3a34dc4a73c", 00:07:28.343 "aliases": [ 00:07:28.343 "lvs/lvol" 00:07:28.343 ], 00:07:28.343 "product_name": "Logical Volume", 00:07:28.343 "block_size": 4096, 00:07:28.343 "num_blocks": 38912, 00:07:28.343 "uuid": "2aa9a747-6e3c-414c-ae21-e3a34dc4a73c", 00:07:28.343 "assigned_rate_limits": { 00:07:28.343 "rw_ios_per_sec": 0, 00:07:28.343 "rw_mbytes_per_sec": 0, 00:07:28.343 "r_mbytes_per_sec": 0, 00:07:28.343 "w_mbytes_per_sec": 0 00:07:28.343 }, 00:07:28.343 "claimed": false, 00:07:28.343 "zoned": false, 00:07:28.343 "supported_io_types": { 00:07:28.343 "read": true, 00:07:28.343 "write": true, 00:07:28.343 "unmap": true, 00:07:28.343 "flush": false, 00:07:28.343 "reset": true, 00:07:28.343 "nvme_admin": false, 00:07:28.343 "nvme_io": false, 00:07:28.343 "nvme_io_md": false, 00:07:28.343 "write_zeroes": true, 00:07:28.343 "zcopy": false, 00:07:28.343 "get_zone_info": false, 00:07:28.343 "zone_management": false, 00:07:28.343 "zone_append": 
false, 00:07:28.343 "compare": false, 00:07:28.343 "compare_and_write": false, 00:07:28.343 "abort": false, 00:07:28.343 "seek_hole": true, 00:07:28.343 "seek_data": true, 00:07:28.343 "copy": false, 00:07:28.343 "nvme_iov_md": false 00:07:28.343 }, 00:07:28.343 "driver_specific": { 00:07:28.343 "lvol": { 00:07:28.343 "lvol_store_uuid": "ed26ca00-ec4d-4978-bf3b-9236e4192573", 00:07:28.343 "base_bdev": "aio_bdev", 00:07:28.343 "thin_provision": false, 00:07:28.343 "num_allocated_clusters": 38, 00:07:28.343 "snapshot": false, 00:07:28.343 "clone": false, 00:07:28.343 "esnap_clone": false 00:07:28.343 } 00:07:28.343 } 00:07:28.343 } 00:07:28.343 ] 00:07:28.343 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:28.602 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed26ca00-ec4d-4978-bf3b-9236e4192573 00:07:28.602 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:28.602 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:28.602 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed26ca00-ec4d-4978-bf3b-9236e4192573 00:07:28.602 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:28.861 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:28.861 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:29.120 [2024-11-20 15:16:32.796742] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:29.120 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed26ca00-ec4d-4978-bf3b-9236e4192573 00:07:29.120 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:29.120 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed26ca00-ec4d-4978-bf3b-9236e4192573 00:07:29.120 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.120 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.120 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.120 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.120 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.120 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.120 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.120 15:16:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:29.120 15:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed26ca00-ec4d-4978-bf3b-9236e4192573 00:07:29.120 request: 00:07:29.120 { 00:07:29.120 "uuid": "ed26ca00-ec4d-4978-bf3b-9236e4192573", 00:07:29.120 "method": "bdev_lvol_get_lvstores", 00:07:29.120 "req_id": 1 00:07:29.120 } 00:07:29.120 Got JSON-RPC error response 00:07:29.120 response: 00:07:29.120 { 00:07:29.120 "code": -19, 00:07:29.120 "message": "No such device" 00:07:29.120 } 00:07:29.379 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:29.379 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.379 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:29.379 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:29.379 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:29.379 aio_bdev 00:07:29.379 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2aa9a747-6e3c-414c-ae21-e3a34dc4a73c 00:07:29.379 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=2aa9a747-6e3c-414c-ae21-e3a34dc4a73c 00:07:29.379 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:29.379 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:29.379 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:29.379 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:29.379 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:29.638 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2aa9a747-6e3c-414c-ae21-e3a34dc4a73c -t 2000 00:07:29.897 [ 00:07:29.897 { 00:07:29.897 "name": "2aa9a747-6e3c-414c-ae21-e3a34dc4a73c", 00:07:29.897 "aliases": [ 00:07:29.897 "lvs/lvol" 00:07:29.897 ], 00:07:29.897 "product_name": "Logical Volume", 00:07:29.897 "block_size": 4096, 00:07:29.897 "num_blocks": 38912, 00:07:29.897 "uuid": "2aa9a747-6e3c-414c-ae21-e3a34dc4a73c", 00:07:29.897 "assigned_rate_limits": { 00:07:29.897 "rw_ios_per_sec": 0, 00:07:29.897 "rw_mbytes_per_sec": 0, 00:07:29.897 "r_mbytes_per_sec": 0, 00:07:29.897 "w_mbytes_per_sec": 0 00:07:29.897 }, 00:07:29.897 "claimed": false, 00:07:29.897 "zoned": false, 00:07:29.897 "supported_io_types": { 00:07:29.897 "read": true, 00:07:29.897 "write": true, 00:07:29.897 "unmap": true, 00:07:29.897 "flush": false, 00:07:29.897 "reset": true, 00:07:29.897 "nvme_admin": false, 00:07:29.897 "nvme_io": false, 00:07:29.897 "nvme_io_md": false, 00:07:29.897 "write_zeroes": true, 00:07:29.897 "zcopy": false, 00:07:29.897 "get_zone_info": false, 00:07:29.897 "zone_management": false, 00:07:29.897 "zone_append": false, 00:07:29.897 "compare": false, 00:07:29.897 "compare_and_write": false, 
00:07:29.897 "abort": false, 00:07:29.897 "seek_hole": true, 00:07:29.897 "seek_data": true, 00:07:29.897 "copy": false, 00:07:29.897 "nvme_iov_md": false 00:07:29.897 }, 00:07:29.897 "driver_specific": { 00:07:29.897 "lvol": { 00:07:29.897 "lvol_store_uuid": "ed26ca00-ec4d-4978-bf3b-9236e4192573", 00:07:29.897 "base_bdev": "aio_bdev", 00:07:29.897 "thin_provision": false, 00:07:29.897 "num_allocated_clusters": 38, 00:07:29.897 "snapshot": false, 00:07:29.897 "clone": false, 00:07:29.897 "esnap_clone": false 00:07:29.897 } 00:07:29.897 } 00:07:29.897 } 00:07:29.897 ] 00:07:29.897 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:29.897 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed26ca00-ec4d-4978-bf3b-9236e4192573 00:07:29.897 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:29.897 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:29.897 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ed26ca00-ec4d-4978-bf3b-9236e4192573 00:07:29.897 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:30.156 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:30.156 15:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2aa9a747-6e3c-414c-ae21-e3a34dc4a73c 00:07:30.414 15:16:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ed26ca00-ec4d-4978-bf3b-9236e4192573 00:07:30.673 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:30.932 00:07:30.932 real 0m17.031s 00:07:30.932 user 0m44.034s 00:07:30.932 sys 0m3.797s 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:30.932 ************************************ 00:07:30.932 END TEST lvs_grow_dirty 00:07:30.932 ************************************ 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:30.932 nvmf_trace.0 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:30.932 rmmod nvme_tcp 00:07:30.932 rmmod nvme_fabrics 00:07:30.932 rmmod nvme_keyring 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2019711 ']' 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2019711 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2019711 ']' 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2019711 
00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2019711 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2019711' 00:07:30.932 killing process with pid 2019711 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2019711 00:07:30.932 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2019711 00:07:31.191 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:31.191 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:31.191 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:31.191 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:31.191 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:31.191 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:31.191 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:31.191 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:31.191 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:31.191 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.191 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.191 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.726 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:33.726 00:07:33.726 real 0m42.076s 00:07:33.726 user 1m4.907s 00:07:33.726 sys 0m10.350s 00:07:33.726 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.726 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:33.726 ************************************ 00:07:33.726 END TEST nvmf_lvs_grow 00:07:33.726 ************************************ 00:07:33.726 15:16:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:33.726 15:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:33.726 15:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.726 15:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:33.726 ************************************ 00:07:33.726 START TEST nvmf_bdev_io_wait 00:07:33.726 ************************************ 00:07:33.726 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:33.726 * Looking for test storage... 
00:07:33.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.726 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:33.726 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:33.726 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:33.726 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:33.726 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.726 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.726 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.726 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.726 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:33.727 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.727 --rc genhtml_branch_coverage=1 00:07:33.727 --rc genhtml_function_coverage=1 00:07:33.727 --rc genhtml_legend=1 00:07:33.727 --rc geninfo_all_blocks=1 00:07:33.727 --rc geninfo_unexecuted_blocks=1 00:07:33.727 00:07:33.727 ' 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:33.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.727 --rc genhtml_branch_coverage=1 00:07:33.727 --rc genhtml_function_coverage=1 00:07:33.727 --rc genhtml_legend=1 00:07:33.727 --rc geninfo_all_blocks=1 00:07:33.727 --rc geninfo_unexecuted_blocks=1 00:07:33.727 00:07:33.727 ' 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:33.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.727 --rc genhtml_branch_coverage=1 00:07:33.727 --rc genhtml_function_coverage=1 00:07:33.727 --rc genhtml_legend=1 00:07:33.727 --rc geninfo_all_blocks=1 00:07:33.727 --rc geninfo_unexecuted_blocks=1 00:07:33.727 00:07:33.727 ' 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:33.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.727 --rc genhtml_branch_coverage=1 00:07:33.727 --rc genhtml_function_coverage=1 00:07:33.727 --rc genhtml_legend=1 00:07:33.727 --rc geninfo_all_blocks=1 00:07:33.727 --rc geninfo_unexecuted_blocks=1 00:07:33.727 00:07:33.727 ' 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.727 15:16:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:33.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:33.727 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:33.728 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:33.728 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:33.728 15:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.121 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.122 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.122 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:39.122 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:39.122 15:16:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:39.122 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:39.122 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:39.122 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:39.122 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.122 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:39.122 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:39.122 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:39.122 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:39.122 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.122 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.122 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:39.122 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.122 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:39.122 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:39.122 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:39.122 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:39.122 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.381 15:16:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.381 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:39.381 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:39.381 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:39.381 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:39.381 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:39.381 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.381 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:39.381 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.381 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:39.381 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:39.381 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.381 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:39.381 Found net devices under 0000:86:00.0: cvl_0_0 00:07:39.381 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.381 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:39.381 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.381 
15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:39.381 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.381 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:39.381 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:39.381 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.381 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:39.381 Found net devices under 0000:86:00.1: cvl_0_1 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.382 15:16:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:39.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:07:39.382 00:07:39.382 --- 10.0.0.2 ping statistics --- 00:07:39.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.382 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:39.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:07:39.382 00:07:39.382 --- 10.0.0.1 ping statistics --- 00:07:39.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.382 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:39.382 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2023829 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
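The `nvmf_tcp_init` sequence traced above (nvmf/common.sh@250-291) moves the target-side interface into a dedicated network namespace so initiator and target traffic cross a real routed path, then verifies connectivity with ping in both directions. A minimal dry-run sketch of those steps, assuming the `cvl_0_0`/`cvl_0_1` interface names and `10.0.0.x` addresses seen in this log:

```shell
# Dry-run sketch of the netns setup performed by nvmf_tcp_init above.
# RUN=echo (default) prints the commands; set RUN="" and run as root
# to apply them for real. Interface names/IPs are the ones in this log.
RUN="${RUN:-echo}"

setup_target_ns() {
  # Create the target namespace and move the target NIC into it.
  $RUN ip netns add cvl_0_0_ns_spdk
  $RUN ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Initiator side stays in the root namespace.
  $RUN ip addr add 10.0.0.1/24 dev cvl_0_1
  # Target side is configured from inside its namespace.
  $RUN ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  # Bring everything up, including loopback inside the namespace.
  $RUN ip link set cvl_0_1 up
  $RUN ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  $RUN ip netns exec cvl_0_0_ns_spdk ip link set lo up
}

setup_target_ns
```

This matches the command order in the trace; the subsequent `ip netns exec cvl_0_0_ns_spdk ping` lines confirm the two namespaces can reach each other before `nvmf_tgt` is started inside the target namespace.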
nvmf/common.sh@510 -- # waitforlisten 2023829 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2023829 ']' 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.641 [2024-11-20 15:16:43.352574] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:07:39.641 [2024-11-20 15:16:43.352621] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.641 [2024-11-20 15:16:43.432545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.641 [2024-11-20 15:16:43.476706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.641 [2024-11-20 15:16:43.476745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:39.641 [2024-11-20 15:16:43.476752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.641 [2024-11-20 15:16:43.476758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.641 [2024-11-20 15:16:43.476763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:39.641 [2024-11-20 15:16:43.478231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.641 [2024-11-20 15:16:43.478338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.641 [2024-11-20 15:16:43.478448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.641 [2024-11-20 15:16:43.478449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.641 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.901 15:16:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.901 [2024-11-20 15:16:43.607009] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.901 Malloc0 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.901 
15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.901 [2024-11-20 15:16:43.650596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2024018 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2024020 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2024021 
00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2024023 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:39.901 { 00:07:39.901 "params": { 00:07:39.901 "name": "Nvme$subsystem", 00:07:39.901 "trtype": "$TEST_TRANSPORT", 00:07:39.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:39.901 "adrfam": "ipv4", 00:07:39.901 "trsvcid": "$NVMF_PORT", 00:07:39.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:39.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:39.901 "hdgst": ${hdgst:-false}, 00:07:39.901 "ddgst": ${ddgst:-false} 00:07:39.901 }, 00:07:39.901 "method": "bdev_nvme_attach_controller" 00:07:39.901 } 00:07:39.901 EOF 00:07:39.901 )") 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:39.901 { 00:07:39.901 "params": { 00:07:39.901 "name": "Nvme$subsystem", 00:07:39.901 "trtype": "$TEST_TRANSPORT", 00:07:39.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:39.901 "adrfam": "ipv4", 00:07:39.901 "trsvcid": "$NVMF_PORT", 00:07:39.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:39.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:39.901 "hdgst": ${hdgst:-false}, 00:07:39.901 "ddgst": ${ddgst:-false} 00:07:39.901 }, 00:07:39.901 "method": "bdev_nvme_attach_controller" 00:07:39.901 } 00:07:39.901 EOF 00:07:39.901 )") 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:07:39.901 { 00:07:39.901 "params": { 00:07:39.901 "name": "Nvme$subsystem", 00:07:39.901 "trtype": "$TEST_TRANSPORT", 00:07:39.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:39.901 "adrfam": "ipv4", 00:07:39.901 "trsvcid": "$NVMF_PORT", 00:07:39.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:39.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:39.901 "hdgst": ${hdgst:-false}, 00:07:39.901 "ddgst": ${ddgst:-false} 00:07:39.901 }, 00:07:39.901 "method": "bdev_nvme_attach_controller" 00:07:39.901 } 00:07:39.901 EOF 00:07:39.901 )") 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:39.901 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:39.901 { 00:07:39.901 "params": { 00:07:39.901 "name": "Nvme$subsystem", 00:07:39.901 "trtype": "$TEST_TRANSPORT", 00:07:39.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:39.901 "adrfam": "ipv4", 00:07:39.901 "trsvcid": "$NVMF_PORT", 00:07:39.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:39.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:39.901 "hdgst": ${hdgst:-false}, 00:07:39.901 "ddgst": ${ddgst:-false} 00:07:39.902 }, 00:07:39.902 "method": "bdev_nvme_attach_controller" 00:07:39.902 } 00:07:39.902 EOF 00:07:39.902 )") 00:07:39.902 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2024018 00:07:39.902 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:39.902 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:39.902 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 
00:07:39.902 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:39.902 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:39.902 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:39.902 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:39.902 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:39.902 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:39.902 "params": { 00:07:39.902 "name": "Nvme1", 00:07:39.902 "trtype": "tcp", 00:07:39.902 "traddr": "10.0.0.2", 00:07:39.902 "adrfam": "ipv4", 00:07:39.902 "trsvcid": "4420", 00:07:39.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:39.902 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:39.902 "hdgst": false, 00:07:39.902 "ddgst": false 00:07:39.902 }, 00:07:39.902 "method": "bdev_nvme_attach_controller" 00:07:39.902 }' 00:07:39.902 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:39.902 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:39.902 "params": { 00:07:39.902 "name": "Nvme1", 00:07:39.902 "trtype": "tcp", 00:07:39.902 "traddr": "10.0.0.2", 00:07:39.902 "adrfam": "ipv4", 00:07:39.902 "trsvcid": "4420", 00:07:39.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:39.902 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:39.902 "hdgst": false, 00:07:39.902 "ddgst": false 00:07:39.902 }, 00:07:39.902 "method": "bdev_nvme_attach_controller" 00:07:39.902 }' 00:07:39.902 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:39.902 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:39.902 "params": { 00:07:39.902 "name": "Nvme1", 00:07:39.902 "trtype": "tcp", 00:07:39.902 "traddr": 
"10.0.0.2", 00:07:39.902 "adrfam": "ipv4", 00:07:39.902 "trsvcid": "4420", 00:07:39.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:39.902 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:39.902 "hdgst": false, 00:07:39.902 "ddgst": false 00:07:39.902 }, 00:07:39.902 "method": "bdev_nvme_attach_controller" 00:07:39.902 }' 00:07:39.902 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:39.902 15:16:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:39.902 "params": { 00:07:39.902 "name": "Nvme1", 00:07:39.902 "trtype": "tcp", 00:07:39.902 "traddr": "10.0.0.2", 00:07:39.902 "adrfam": "ipv4", 00:07:39.902 "trsvcid": "4420", 00:07:39.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:39.902 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:39.902 "hdgst": false, 00:07:39.902 "ddgst": false 00:07:39.902 }, 00:07:39.902 "method": "bdev_nvme_attach_controller" 00:07:39.902 }' 00:07:39.902 [2024-11-20 15:16:43.699892] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:07:39.902 [2024-11-20 15:16:43.699940] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:39.902 [2024-11-20 15:16:43.702137] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:07:39.902 [2024-11-20 15:16:43.702174] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:39.902 [2024-11-20 15:16:43.703799] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
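The four `gen_nvmf_target_json` expansions traced above (nvmf/common.sh@560-586) all follow the same pattern: build one `bdev_nvme_attach_controller` entry per subsystem from environment variables, join the entries, and feed the result to each `bdevperf` instance via `--json /dev/fd/63` process substitution. A simplified sketch of that pattern, using `printf` instead of the original heredoc and the values seen in this log as illustrative defaults:

```shell
# Simplified sketch of the gen_nvmf_target_json pattern in the trace.
# Defaults below mirror this log's values; the real SPDK helper reads
# them from the test environment.
TEST_TRANSPORT="${TEST_TRANSPORT:-tcp}"
NVMF_FIRST_TARGET_IP="${NVMF_FIRST_TARGET_IP:-10.0.0.2}"
NVMF_PORT="${NVMF_PORT:-4420}"

gen_nvmf_target_json() {
  local subsystem entries=()
  # "${@:-1}" defaults to subsystem 1 when no arguments are given,
  # the same trick used by the traced loop above.
  for subsystem in "${@:-1}"; do
    entries+=("$(printf '{"params":{"name":"Nvme%s","trtype":"%s","traddr":"%s","adrfam":"ipv4","trsvcid":"%s","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' \
      "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" \
      "$subsystem" "$subsystem")")
  done
  # Join the per-subsystem entries with commas into one bdev config.
  local IFS=,
  printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${entries[*]}"
}

gen_nvmf_target_json
```

In the actual run each of the four `bdevperf` processes (write, read, flush, unmap) receives such a config on a different file descriptor, which is why the trace shows the same JSON printed four times with `jq .` normalizing each copy.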
00:07:39.902 [2024-11-20 15:16:43.703846] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:39.902 [2024-11-20 15:16:43.706324] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:07:39.902 [2024-11-20 15:16:43.706364] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:40.161 [2024-11-20 15:16:43.885932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.161 [2024-11-20 15:16:43.928941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:40.161 [2024-11-20 15:16:43.978331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.161 [2024-11-20 15:16:44.021267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:40.419 [2024-11-20 15:16:44.070538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.419 [2024-11-20 15:16:44.130155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:40.419 [2024-11-20 15:16:44.138531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.419 [2024-11-20 15:16:44.181499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:40.419 Running I/O for 1 seconds... 00:07:40.419 Running I/O for 1 seconds... 00:07:40.677 Running I/O for 1 seconds... 00:07:40.677 Running I/O for 1 seconds... 
00:07:41.613 11573.00 IOPS, 45.21 MiB/s 00:07:41.614 Latency(us) 00:07:41.614 [2024-11-20T14:16:45.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.614 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:41.614 Nvme1n1 : 1.01 11620.43 45.39 0.00 0.00 10974.26 6154.69 15728.64 00:07:41.614 [2024-11-20T14:16:45.522Z] =================================================================================================================== 00:07:41.614 [2024-11-20T14:16:45.522Z] Total : 11620.43 45.39 0.00 0.00 10974.26 6154.69 15728.64 00:07:41.614 10350.00 IOPS, 40.43 MiB/s 00:07:41.614 Latency(us) 00:07:41.614 [2024-11-20T14:16:45.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.614 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:41.614 Nvme1n1 : 1.01 10420.18 40.70 0.00 0.00 12244.06 4758.48 20857.54 00:07:41.614 [2024-11-20T14:16:45.522Z] =================================================================================================================== 00:07:41.614 [2024-11-20T14:16:45.522Z] Total : 10420.18 40.70 0.00 0.00 12244.06 4758.48 20857.54 00:07:41.614 9833.00 IOPS, 38.41 MiB/s 00:07:41.614 Latency(us) 00:07:41.614 [2024-11-20T14:16:45.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.614 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:41.614 Nvme1n1 : 1.01 9922.10 38.76 0.00 0.00 12866.36 3647.22 25872.47 00:07:41.614 [2024-11-20T14:16:45.522Z] =================================================================================================================== 00:07:41.614 [2024-11-20T14:16:45.522Z] Total : 9922.10 38.76 0.00 0.00 12866.36 3647.22 25872.47 00:07:41.614 238296.00 IOPS, 930.84 MiB/s 00:07:41.614 Latency(us) 00:07:41.614 [2024-11-20T14:16:45.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.614 Job: Nvme1n1 (Core 
Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:41.614 Nvme1n1 : 1.00 237921.42 929.38 0.00 0.00 535.01 235.07 1552.92 00:07:41.614 [2024-11-20T14:16:45.522Z] =================================================================================================================== 00:07:41.614 [2024-11-20T14:16:45.522Z] Total : 237921.42 929.38 0.00 0.00 535.01 235.07 1552.92 00:07:41.614 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2024020 00:07:41.614 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2024021 00:07:41.614 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2024023 00:07:41.614 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:41.614 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.614 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.614 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.614 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:41.614 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:41.614 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:41.614 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:41.614 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:41.614 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:41.614 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
00:07:41.614 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:41.614 rmmod nvme_tcp 00:07:41.614 rmmod nvme_fabrics 00:07:41.873 rmmod nvme_keyring 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2023829 ']' 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2023829 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2023829 ']' 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2023829 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2023829 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2023829' 00:07:41.873 killing process with pid 2023829 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2023829 00:07:41.873 15:16:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2023829 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.873 15:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.414 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:44.414 00:07:44.414 real 0m10.724s 00:07:44.414 user 0m15.867s 00:07:44.414 sys 0m6.199s 00:07:44.414 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.414 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.414 ************************************ 
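The teardown traced above pairs with the earlier `ipts` call (nvmf/common.sh@287): every iptables rule the test installs carries an `SPDK_NVMF` comment tag, so `iptr` can later remove exactly those rules by saving the ruleset, filtering the tagged lines out, and restoring the rest. The sketch below demonstrates the filtering step on a hypothetical saved ruleset rather than live iptables state, so it needs no root:

```shell
# Sketch of the iptr cleanup pattern (iptables-save | grep -v SPDK_NVMF
# | iptables-restore). The sample ruleset below is illustrative, not
# taken from a real host; only the grep filter is the point.
sample_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:test-rule
-A INPUT -j DROP'

# Teardown keeps every rule that lacks the SPDK_NVMF marker; on a real
# host this filtered output would be piped into iptables-restore.
printf '%s\n' "$sample_rules" | grep -v SPDK_NVMF
```

Tagging rules with a comment at install time is what makes this teardown safe: pre-existing firewall rules pass through the filter untouched, while every rule the test added is dropped in one pass.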
00:07:44.414 END TEST nvmf_bdev_io_wait 00:07:44.414 ************************************ 00:07:44.414 15:16:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:44.414 15:16:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:44.414 15:16:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.414 15:16:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:44.414 ************************************ 00:07:44.414 START TEST nvmf_queue_depth 00:07:44.414 ************************************ 00:07:44.414 15:16:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:44.414 * Looking for test storage... 00:07:44.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:44.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.414 --rc genhtml_branch_coverage=1 00:07:44.414 --rc genhtml_function_coverage=1 00:07:44.414 --rc genhtml_legend=1 00:07:44.414 --rc geninfo_all_blocks=1 00:07:44.414 --rc 
geninfo_unexecuted_blocks=1 00:07:44.414 00:07:44.414 ' 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:44.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.414 --rc genhtml_branch_coverage=1 00:07:44.414 --rc genhtml_function_coverage=1 00:07:44.414 --rc genhtml_legend=1 00:07:44.414 --rc geninfo_all_blocks=1 00:07:44.414 --rc geninfo_unexecuted_blocks=1 00:07:44.414 00:07:44.414 ' 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:44.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.414 --rc genhtml_branch_coverage=1 00:07:44.414 --rc genhtml_function_coverage=1 00:07:44.414 --rc genhtml_legend=1 00:07:44.414 --rc geninfo_all_blocks=1 00:07:44.414 --rc geninfo_unexecuted_blocks=1 00:07:44.414 00:07:44.414 ' 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:44.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.414 --rc genhtml_branch_coverage=1 00:07:44.414 --rc genhtml_function_coverage=1 00:07:44.414 --rc genhtml_legend=1 00:07:44.414 --rc geninfo_all_blocks=1 00:07:44.414 --rc geninfo_unexecuted_blocks=1 00:07:44.414 00:07:44.414 ' 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.414 15:16:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:44.414 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.415 15:16:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:44.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.415 15:16:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:44.415 15:16:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:50.999 15:16:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:50.999 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:51.000 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:51.000 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:51.000 Found net devices under 0000:86:00.0: cvl_0_0 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:51.000 Found net devices under 0000:86:00.1: cvl_0_1 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.000 
15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:51.000 15:16:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.000 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:51.000 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.000 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:51.000 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:51.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:51.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:07:51.000 00:07:51.000 --- 10.0.0.2 ping statistics --- 00:07:51.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.000 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:07:51.000 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:51.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:07:51.000 00:07:51.000 --- 10.0.0.1 ping statistics --- 00:07:51.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.000 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:07:51.000 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.000 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:51.000 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:51.000 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.000 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:51.000 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:51.000 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.000 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:51.000 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:51.000 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:51.000 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:51.000 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:51.000 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.000 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2027815 00:07:51.000 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
2027815 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2027815 ']' 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.001 [2024-11-20 15:16:54.143729] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:07:51.001 [2024-11-20 15:16:54.143780] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.001 [2024-11-20 15:16:54.225443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.001 [2024-11-20 15:16:54.267084] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.001 [2024-11-20 15:16:54.267118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:51.001 [2024-11-20 15:16:54.267125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.001 [2024-11-20 15:16:54.267132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.001 [2024-11-20 15:16:54.267138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:51.001 [2024-11-20 15:16:54.267673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.001 [2024-11-20 15:16:54.410795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.001 Malloc0 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.001 [2024-11-20 15:16:54.461141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.001 15:16:54 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2027843 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2027843 /var/tmp/bdevperf.sock 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2027843 ']' 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:51.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.001 [2024-11-20 15:16:54.513641] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:07:51.001 [2024-11-20 15:16:54.513691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2027843 ] 00:07:51.001 [2024-11-20 15:16:54.588450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.001 [2024-11-20 15:16:54.629651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.001 NVMe0n1 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.001 15:16:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:51.261 Running I/O for 10 seconds... 
00:07:53.134 11305.00 IOPS, 44.16 MiB/s [2024-11-20T14:16:58.419Z] 11781.00 IOPS, 46.02 MiB/s [2024-11-20T14:16:58.988Z] 11950.33 IOPS, 46.68 MiB/s [2024-11-20T14:17:00.366Z] 12026.50 IOPS, 46.98 MiB/s [2024-11-20T14:17:01.302Z] 12072.80 IOPS, 47.16 MiB/s [2024-11-20T14:17:02.237Z] 12074.00 IOPS, 47.16 MiB/s [2024-11-20T14:17:03.174Z] 12130.14 IOPS, 47.38 MiB/s [2024-11-20T14:17:04.109Z] 12149.00 IOPS, 47.46 MiB/s [2024-11-20T14:17:05.045Z] 12164.33 IOPS, 47.52 MiB/s [2024-11-20T14:17:05.303Z] 12175.70 IOPS, 47.56 MiB/s 00:08:01.395 Latency(us) 00:08:01.395 [2024-11-20T14:17:05.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.395 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:01.395 Verification LBA range: start 0x0 length 0x4000 00:08:01.395 NVMe0n1 : 10.06 12207.82 47.69 0.00 0.00 83623.84 19375.86 54480.36 00:08:01.395 [2024-11-20T14:17:05.303Z] =================================================================================================================== 00:08:01.395 [2024-11-20T14:17:05.303Z] Total : 12207.82 47.69 0.00 0.00 83623.84 19375.86 54480.36 00:08:01.395 { 00:08:01.395 "results": [ 00:08:01.395 { 00:08:01.395 "job": "NVMe0n1", 00:08:01.395 "core_mask": "0x1", 00:08:01.395 "workload": "verify", 00:08:01.395 "status": "finished", 00:08:01.395 "verify_range": { 00:08:01.395 "start": 0, 00:08:01.395 "length": 16384 00:08:01.395 }, 00:08:01.395 "queue_depth": 1024, 00:08:01.395 "io_size": 4096, 00:08:01.395 "runtime": 10.056995, 00:08:01.395 "iops": 12207.821521239694, 00:08:01.395 "mibps": 47.68680281734255, 00:08:01.395 "io_failed": 0, 00:08:01.395 "io_timeout": 0, 00:08:01.395 "avg_latency_us": 83623.8396386149, 00:08:01.395 "min_latency_us": 19375.86086956522, 00:08:01.395 "max_latency_us": 54480.361739130436 00:08:01.395 } 00:08:01.395 ], 00:08:01.395 "core_count": 1 00:08:01.395 } 00:08:01.395 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 2027843 00:08:01.395 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2027843 ']' 00:08:01.395 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2027843 00:08:01.395 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:01.395 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.395 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2027843 00:08:01.395 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.395 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.395 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2027843' 00:08:01.395 killing process with pid 2027843 00:08:01.395 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2027843 00:08:01.395 Received shutdown signal, test time was about 10.000000 seconds 00:08:01.395 00:08:01.395 Latency(us) 00:08:01.395 [2024-11-20T14:17:05.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.395 [2024-11-20T14:17:05.303Z] =================================================================================================================== 00:08:01.395 [2024-11-20T14:17:05.303Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:01.395 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2027843 00:08:01.395 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:01.395 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:01.395 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:01.395 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:01.395 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:01.395 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:01.395 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:01.395 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:01.395 rmmod nvme_tcp 00:08:01.653 rmmod nvme_fabrics 00:08:01.653 rmmod nvme_keyring 00:08:01.653 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:01.653 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:01.653 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:01.653 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2027815 ']' 00:08:01.653 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2027815 00:08:01.653 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2027815 ']' 00:08:01.653 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2027815 00:08:01.653 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:01.653 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.653 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2027815 00:08:01.653 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:08:01.653 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:01.653 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2027815' 00:08:01.653 killing process with pid 2027815 00:08:01.653 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2027815 00:08:01.653 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2027815 00:08:01.911 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:01.911 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:01.911 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:01.911 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:01.911 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:01.911 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:01.911 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:01.911 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:01.911 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:01.911 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.911 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.911 15:17:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.819 15:17:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:03.819 00:08:03.819 real 0m19.737s 00:08:03.819 user 0m23.226s 00:08:03.819 sys 0m5.968s 00:08:03.819 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.819 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.819 ************************************ 00:08:03.819 END TEST nvmf_queue_depth 00:08:03.819 ************************************ 00:08:03.819 15:17:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:03.819 15:17:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:03.819 15:17:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.819 15:17:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:04.077 ************************************ 00:08:04.077 START TEST nvmf_target_multipath 00:08:04.077 ************************************ 00:08:04.077 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:04.077 * Looking for test storage... 
00:08:04.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.077 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:04.077 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:04.077 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:04.077 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:04.077 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.077 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.077 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.077 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.077 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.077 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.077 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.077 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.077 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.077 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.077 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.077 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:04.077 15:17:07 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:04.077 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.077 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:04.077 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:04.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.078 --rc genhtml_branch_coverage=1 00:08:04.078 --rc genhtml_function_coverage=1 00:08:04.078 --rc genhtml_legend=1 00:08:04.078 --rc geninfo_all_blocks=1 00:08:04.078 --rc geninfo_unexecuted_blocks=1 00:08:04.078 00:08:04.078 ' 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:04.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.078 --rc genhtml_branch_coverage=1 00:08:04.078 --rc genhtml_function_coverage=1 00:08:04.078 --rc genhtml_legend=1 00:08:04.078 --rc geninfo_all_blocks=1 00:08:04.078 --rc geninfo_unexecuted_blocks=1 00:08:04.078 00:08:04.078 ' 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:04.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.078 --rc genhtml_branch_coverage=1 00:08:04.078 --rc genhtml_function_coverage=1 00:08:04.078 --rc genhtml_legend=1 00:08:04.078 --rc geninfo_all_blocks=1 00:08:04.078 --rc geninfo_unexecuted_blocks=1 00:08:04.078 00:08:04.078 ' 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:04.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.078 --rc genhtml_branch_coverage=1 00:08:04.078 --rc genhtml_function_coverage=1 00:08:04.078 --rc genhtml_legend=1 00:08:04.078 --rc geninfo_all_blocks=1 00:08:04.078 --rc geninfo_unexecuted_blocks=1 00:08:04.078 00:08:04.078 ' 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.078 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:04.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:04.079 15:17:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:10.659 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:10.659 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:10.659 Found net devices under 0000:86:00.0: cvl_0_0 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:10.659 15:17:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:10.659 Found net devices under 0000:86:00.1: cvl_0_1 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.659 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:10.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:08:10.660 00:08:10.660 --- 10.0.0.2 ping statistics --- 00:08:10.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.660 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:10.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:08:10.660 00:08:10.660 --- 10.0.0.1 ping statistics --- 00:08:10.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.660 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:10.660 only one NIC for nvmf test 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:10.660 15:17:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:10.660 15:17:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:10.660 rmmod nvme_tcp 00:08:10.660 rmmod nvme_fabrics 00:08:10.660 rmmod nvme_keyring 00:08:10.660 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:10.660 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:10.660 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:10.660 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:10.660 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:10.660 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:10.660 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:10.660 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:10.660 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:10.660 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:10.660 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:10.660 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:10.660 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:10.660 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.660 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.660 15:17:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:12.570 00:08:12.570 real 0m8.408s 00:08:12.570 user 0m1.832s 00:08:12.570 sys 0m4.601s 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:12.570 ************************************ 00:08:12.570 END TEST nvmf_target_multipath 00:08:12.570 ************************************ 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:12.570 ************************************ 00:08:12.570 START TEST nvmf_zcopy 00:08:12.570 ************************************ 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:12.570 * Looking for test storage... 00:08:12.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.570 15:17:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:12.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.570 --rc genhtml_branch_coverage=1 00:08:12.570 --rc genhtml_function_coverage=1 00:08:12.570 --rc genhtml_legend=1 00:08:12.570 --rc geninfo_all_blocks=1 00:08:12.570 --rc geninfo_unexecuted_blocks=1 00:08:12.570 00:08:12.570 ' 00:08:12.570 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:12.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.571 --rc genhtml_branch_coverage=1 00:08:12.571 --rc genhtml_function_coverage=1 00:08:12.571 --rc genhtml_legend=1 00:08:12.571 --rc geninfo_all_blocks=1 00:08:12.571 --rc geninfo_unexecuted_blocks=1 00:08:12.571 00:08:12.571 ' 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:12.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.571 --rc genhtml_branch_coverage=1 00:08:12.571 --rc genhtml_function_coverage=1 00:08:12.571 --rc genhtml_legend=1 00:08:12.571 --rc geninfo_all_blocks=1 00:08:12.571 --rc geninfo_unexecuted_blocks=1 00:08:12.571 00:08:12.571 ' 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:12.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.571 --rc genhtml_branch_coverage=1 00:08:12.571 --rc 
genhtml_function_coverage=1 00:08:12.571 --rc genhtml_legend=1 00:08:12.571 --rc geninfo_all_blocks=1 00:08:12.571 --rc geninfo_unexecuted_blocks=1 00:08:12.571 00:08:12.571 ' 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.571 15:17:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:12.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:12.571 15:17:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:12.571 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:19.147 15:17:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:19.147 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:19.147 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:19.147 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:19.148 Found net devices under 0000:86:00.0: cvl_0_0 00:08:19.148 15:17:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:19.148 Found net devices under 0000:86:00.1: cvl_0_1 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.148 15:17:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:19.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:08:19.148 00:08:19.148 --- 10.0.0.2 ping statistics --- 00:08:19.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.148 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:08:19.148 00:08:19.148 --- 10.0.0.1 ping statistics --- 00:08:19.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.148 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2036760 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2036760 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2036760 ']' 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.148 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:19.148 [2024-11-20 15:17:22.460352] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:08:19.148 [2024-11-20 15:17:22.460403] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.148 [2024-11-20 15:17:22.541023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.148 [2024-11-20 15:17:22.582019] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.148 [2024-11-20 15:17:22.582057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:19.148 [2024-11-20 15:17:22.582064] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.148 [2024-11-20 15:17:22.582070] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.148 [2024-11-20 15:17:22.582075] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.149 [2024-11-20 15:17:22.582627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:19.149 [2024-11-20 15:17:22.719105] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:19.149 [2024-11-20 15:17:22.739283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:19.149 malloc0 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:19.149 { 00:08:19.149 "params": { 00:08:19.149 "name": "Nvme$subsystem", 00:08:19.149 "trtype": "$TEST_TRANSPORT", 00:08:19.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:19.149 "adrfam": "ipv4", 00:08:19.149 "trsvcid": "$NVMF_PORT", 00:08:19.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:19.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:19.149 "hdgst": ${hdgst:-false}, 00:08:19.149 "ddgst": ${ddgst:-false} 00:08:19.149 }, 00:08:19.149 "method": "bdev_nvme_attach_controller" 00:08:19.149 } 00:08:19.149 EOF 00:08:19.149 )") 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:19.149 15:17:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:19.149 "params": { 00:08:19.149 "name": "Nvme1", 00:08:19.149 "trtype": "tcp", 00:08:19.149 "traddr": "10.0.0.2", 00:08:19.149 "adrfam": "ipv4", 00:08:19.149 "trsvcid": "4420", 00:08:19.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:19.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:19.149 "hdgst": false, 00:08:19.149 "ddgst": false 00:08:19.149 }, 00:08:19.149 "method": "bdev_nvme_attach_controller" 00:08:19.149 }' 00:08:19.149 [2024-11-20 15:17:22.818820] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:08:19.149 [2024-11-20 15:17:22.818866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2036902 ] 00:08:19.149 [2024-11-20 15:17:22.890479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.149 [2024-11-20 15:17:22.943781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.410 Running I/O for 10 seconds... 
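The dense, line-wrapped xtrace above performs a two-stage bring-up: `nvmf/common.sh`'s `nvmf_tcp_init` moves one NIC port into a network namespace and addresses both ends, then `zcopy.sh` configures the target over RPC. That flow can be replayed as a short dry-run script; every interface name, IP, NQN, and RPC argument below is copied from the log, while the `run` wrapper (and the bare `rpc.py` path) are illustrative assumptions so the sketch stays safe without root or a live SPDK target:

```shell
#!/bin/sh
# Dry-run replay of the bring-up in the trace above. "run" only echoes,
# so nothing here needs root or a running target; drop the echo (and point
# at SPDK's scripts/rpc.py) to execute the same steps for real.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk       # target-side network namespace
TGT_IF=cvl_0_0           # NIC port moved into the namespace (10.0.0.2)
INI_IF=cvl_0_1           # peer port left in the root namespace (10.0.0.1)
NQN=nqn.2016-06.io.spdk:cnode1

# 1. nvmf_tcp_init: isolate one port in a namespace, address both ends.
run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2

# 2. zcopy.sh: configure the zero-copy TCP target over RPC.
run rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
run rpc.py nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
run rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
run rpc.py bdev_malloc_create 32 4096 -b malloc0
run rpc.py nvmf_subsystem_add_ns "$NQN" malloc0 -n 1
```

bdevperf is then launched against this target with a JSON config generated on the fly (the `bdev_nvme_attach_controller` object printed in the trace), delivered via `/dev/fd/62`.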
00:08:21.285 8409.00 IOPS, 65.70 MiB/s [2024-11-20T14:17:26.132Z] 8478.50 IOPS, 66.24 MiB/s [2024-11-20T14:17:27.511Z] 8486.00 IOPS, 66.30 MiB/s [2024-11-20T14:17:28.449Z] 8491.75 IOPS, 66.34 MiB/s [2024-11-20T14:17:29.388Z] 8499.00 IOPS, 66.40 MiB/s [2024-11-20T14:17:30.325Z] 8517.33 IOPS, 66.54 MiB/s [2024-11-20T14:17:31.259Z] 8521.43 IOPS, 66.57 MiB/s [2024-11-20T14:17:32.195Z] 8516.25 IOPS, 66.53 MiB/s [2024-11-20T14:17:33.574Z] 8519.89 IOPS, 66.56 MiB/s [2024-11-20T14:17:33.574Z] 8529.20 IOPS, 66.63 MiB/s 00:08:29.666 Latency(us) 00:08:29.666 [2024-11-20T14:17:33.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.666 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:29.666 Verification LBA range: start 0x0 length 0x1000 00:08:29.666 Nvme1n1 : 10.01 8529.09 66.63 0.00 0.00 14963.83 407.82 22111.28 00:08:29.666 [2024-11-20T14:17:33.574Z] =================================================================================================================== 00:08:29.666 [2024-11-20T14:17:33.574Z] Total : 8529.09 66.63 0.00 0.00 14963.83 407.82 22111.28 00:08:29.666 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2038618 00:08:29.666 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:29.666 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.666 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:29.666 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:29.666 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:29.666 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:29.666 15:17:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:29.666 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:29.666 { 00:08:29.666 "params": { 00:08:29.666 "name": "Nvme$subsystem", 00:08:29.666 "trtype": "$TEST_TRANSPORT", 00:08:29.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:29.666 "adrfam": "ipv4", 00:08:29.666 "trsvcid": "$NVMF_PORT", 00:08:29.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:29.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:29.666 "hdgst": ${hdgst:-false}, 00:08:29.666 "ddgst": ${ddgst:-false} 00:08:29.666 }, 00:08:29.666 "method": "bdev_nvme_attach_controller" 00:08:29.666 } 00:08:29.666 EOF 00:08:29.666 )") 00:08:29.666 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:29.666 [2024-11-20 15:17:33.319596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.666 [2024-11-20 15:17:33.319629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.666 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:29.666 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:29.666 15:17:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:29.666 "params": { 00:08:29.666 "name": "Nvme1", 00:08:29.666 "trtype": "tcp", 00:08:29.666 "traddr": "10.0.0.2", 00:08:29.666 "adrfam": "ipv4", 00:08:29.666 "trsvcid": "4420", 00:08:29.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:29.666 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:29.666 "hdgst": false, 00:08:29.666 "ddgst": false 00:08:29.666 }, 00:08:29.666 "method": "bdev_nvme_attach_controller" 00:08:29.666 }' 00:08:29.666 [2024-11-20 15:17:33.331602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.666 [2024-11-20 15:17:33.331618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.666 [2024-11-20 15:17:33.343624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.666 [2024-11-20 15:17:33.343634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.666 [2024-11-20 15:17:33.355654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.666 [2024-11-20 15:17:33.355664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.666 [2024-11-20 15:17:33.359402] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:08:29.666 [2024-11-20 15:17:33.359444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2038618 ] 00:08:29.666 [2024-11-20 15:17:33.367691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.666 [2024-11-20 15:17:33.367702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.666 [2024-11-20 15:17:33.379718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.666 [2024-11-20 15:17:33.379728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.666 [2024-11-20 15:17:33.391754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.666 [2024-11-20 15:17:33.391764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.666 [2024-11-20 15:17:33.403784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.666 [2024-11-20 15:17:33.403794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.666 [2024-11-20 15:17:33.415813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.666 [2024-11-20 15:17:33.415823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.666 [2024-11-20 15:17:33.427846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.666 [2024-11-20 15:17:33.427856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.666 [2024-11-20 15:17:33.435047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.666 [2024-11-20 15:17:33.439878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:29.666 [2024-11-20 15:17:33.439888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.666 [2024-11-20 15:17:33.451914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.666 [2024-11-20 15:17:33.451928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.666 [2024-11-20 15:17:33.463952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.666 [2024-11-20 15:17:33.463963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.666 [2024-11-20 15:17:33.475980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.666 [2024-11-20 15:17:33.475992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.666 [2024-11-20 15:17:33.477056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.666 [2024-11-20 15:17:33.488025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.666 [2024-11-20 15:17:33.488044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.666 [2024-11-20 15:17:33.500049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.666 [2024-11-20 15:17:33.500065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.666 [2024-11-20 15:17:33.512077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.666 [2024-11-20 15:17:33.512092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.666 [2024-11-20 15:17:33.524109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.666 [2024-11-20 15:17:33.524121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.666 [2024-11-20 15:17:33.536140] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.666 [2024-11-20 15:17:33.536152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.666 [2024-11-20 15:17:33.548169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.666 [2024-11-20 15:17:33.548179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.666 [2024-11-20 15:17:33.560199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.666 [2024-11-20 15:17:33.560209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 [2024-11-20 15:17:33.572249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.572275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 [2024-11-20 15:17:33.584276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.584290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 [2024-11-20 15:17:33.596304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.596318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 [2024-11-20 15:17:33.608335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.608345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 [2024-11-20 15:17:33.620365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.620374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 [2024-11-20 15:17:33.632397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.632408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 [2024-11-20 15:17:33.644431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.644446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 [2024-11-20 15:17:33.656466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.656481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 [2024-11-20 15:17:33.668500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.668512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 [2024-11-20 15:17:33.680533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.680553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 Running I/O for 5 seconds... 
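As a quick cross-check of the bdevperf summary from the earlier 10-second `-w verify` run, the IOPS and MiB/s columns are just unit conversions of each other at the fixed 8 KiB I/O size (`-o 8192`); the values below are copied from the log:

```python
# Verify that the bdevperf table's IOPS and MiB/s columns agree for 8 KiB I/Os.
io_size = 8192            # bytes per I/O, from the -o 8192 option
iops = 8529.09            # Nvme1n1 average over the 10 s verify run
mib_per_s = iops * io_size / (1024 * 1024)
print(round(mib_per_s, 2))  # → 66.63, matching the 66.63 MiB/s in the table
```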
00:08:29.926 [2024-11-20 15:17:33.697296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.697318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 [2024-11-20 15:17:33.712974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.712994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 [2024-11-20 15:17:33.722678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.722698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 [2024-11-20 15:17:33.737672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.737693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 [2024-11-20 15:17:33.749261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.749281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 [2024-11-20 15:17:33.764056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.764077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 [2024-11-20 15:17:33.778190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.778210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 [2024-11-20 15:17:33.792162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.792183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 [2024-11-20 15:17:33.805877] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.805897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 [2024-11-20 15:17:33.820326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.820347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.926 [2024-11-20 15:17:33.831360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.926 [2024-11-20 15:17:33.831381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.186 [2024-11-20 15:17:33.845610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.186 [2024-11-20 15:17:33.845630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.186 [2024-11-20 15:17:33.858818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.186 [2024-11-20 15:17:33.858839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.186 [2024-11-20 15:17:33.873659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.186 [2024-11-20 15:17:33.873680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.186 [2024-11-20 15:17:33.889495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.186 [2024-11-20 15:17:33.889515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.186 [2024-11-20 15:17:33.903680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.186 [2024-11-20 15:17:33.903701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.186 [2024-11-20 15:17:33.917389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:30.186 [2024-11-20 15:17:33.917409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.186 [2024-11-20 15:17:33.931336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.186 [2024-11-20 15:17:33.931359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.186 [2024-11-20 15:17:33.945876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.186 [2024-11-20 15:17:33.945897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.186 [2024-11-20 15:17:33.956777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.186 [2024-11-20 15:17:33.956798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.186 [2024-11-20 15:17:33.970647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.186 [2024-11-20 15:17:33.970668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.186 [2024-11-20 15:17:33.984590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.186 [2024-11-20 15:17:33.984611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.186 [2024-11-20 15:17:33.998493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.186 [2024-11-20 15:17:33.998514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.186 [2024-11-20 15:17:34.012543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.186 [2024-11-20 15:17:34.012564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.186 [2024-11-20 15:17:34.026581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.186 
[2024-11-20 15:17:34.026601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.186 [2024-11-20 15:17:34.040932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.186 [2024-11-20 15:17:34.040960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.186 [2024-11-20 15:17:34.051992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.186 [2024-11-20 15:17:34.052011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.186 [2024-11-20 15:17:34.066547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.186 [2024-11-20 15:17:34.066566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.186 [2024-11-20 15:17:34.080012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.186 [2024-11-20 15:17:34.080031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.446 [2024-11-20 15:17:34.094314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.446 [2024-11-20 15:17:34.094334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.446 [2024-11-20 15:17:34.107313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.446 [2024-11-20 15:17:34.107332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.446 [2024-11-20 15:17:34.117092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.446 [2024-11-20 15:17:34.117111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.446 [2024-11-20 15:17:34.131567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.446 [2024-11-20 15:17:34.131586] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.446 [2024-11-20 15:17:34.144154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.446 [2024-11-20 15:17:34.144174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.446 [2024-11-20 15:17:34.158708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.446 [2024-11-20 15:17:34.158728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.446 [2024-11-20 15:17:34.172754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.446 [2024-11-20 15:17:34.172775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.446 [2024-11-20 15:17:34.186537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.446 [2024-11-20 15:17:34.186557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.446 [2024-11-20 15:17:34.200456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.446 [2024-11-20 15:17:34.200475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.446 [2024-11-20 15:17:34.214794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.446 [2024-11-20 15:17:34.214813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.446 [2024-11-20 15:17:34.228472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.446 [2024-11-20 15:17:34.228492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.446 [2024-11-20 15:17:34.242849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.446 [2024-11-20 15:17:34.242869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:30.446 [2024-11-20 15:17:34.256645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.446 [2024-11-20 15:17:34.256665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.446 [2024-11-20 15:17:34.270846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.446 [2024-11-20 15:17:34.270866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.446 [2024-11-20 15:17:34.285067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.446 [2024-11-20 15:17:34.285089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.446 [2024-11-20 15:17:34.299040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.446 [2024-11-20 15:17:34.299060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.446 [2024-11-20 15:17:34.313268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.446 [2024-11-20 15:17:34.313288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.446 [2024-11-20 15:17:34.322706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.446 [2024-11-20 15:17:34.322732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.446 [2024-11-20 15:17:34.336801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.446 [2024-11-20 15:17:34.336820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.446 [2024-11-20 15:17:34.350939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.446 [2024-11-20 15:17:34.350964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.706 [2024-11-20 15:17:34.362302] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.706 [2024-11-20 15:17:34.362321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.706 [2024-11-20 15:17:34.376960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.706 [2024-11-20 15:17:34.376981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.706 [2024-11-20 15:17:34.390203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.706 [2024-11-20 15:17:34.390223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.706 [2024-11-20 15:17:34.404775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.706 [2024-11-20 15:17:34.404794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.706 [2024-11-20 15:17:34.416017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.706 [2024-11-20 15:17:34.416035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.706 [2024-11-20 15:17:34.430689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.706 [2024-11-20 15:17:34.430708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.706 [2024-11-20 15:17:34.444534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.706 [2024-11-20 15:17:34.444554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.706 [2024-11-20 15:17:34.458644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.706 [2024-11-20 15:17:34.458663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.706 [2024-11-20 15:17:34.472906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:30.706 [2024-11-20 15:17:34.472926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.706 [2024-11-20 15:17:34.483728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.706 [2024-11-20 15:17:34.483747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.706 [2024-11-20 15:17:34.498247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.706 [2024-11-20 15:17:34.498268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.706 [2024-11-20 15:17:34.512292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.706 [2024-11-20 15:17:34.512311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.706 [2024-11-20 15:17:34.526749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.706 [2024-11-20 15:17:34.526769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.706 [2024-11-20 15:17:34.536270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.706 [2024-11-20 15:17:34.536289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.706 [2024-11-20 15:17:34.550747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.706 [2024-11-20 15:17:34.550766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.706 [2024-11-20 15:17:34.564830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.706 [2024-11-20 15:17:34.564849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.706 [2024-11-20 15:17:34.579395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.706 
[2024-11-20 15:17:34.579418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.706 [2024-11-20 15:17:34.594704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.706 [2024-11-20 15:17:34.594723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.706 [2024-11-20 15:17:34.609433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.706 [2024-11-20 15:17:34.609453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.966 [2024-11-20 15:17:34.625226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.966 [2024-11-20 15:17:34.625246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.966 [2024-11-20 15:17:34.639903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.966 [2024-11-20 15:17:34.639922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.966 [2024-11-20 15:17:34.651108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.966 [2024-11-20 15:17:34.651128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.966 [2024-11-20 15:17:34.665557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.966 [2024-11-20 15:17:34.665577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.966 [2024-11-20 15:17:34.679705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.966 [2024-11-20 15:17:34.679724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.966 16394.00 IOPS, 128.08 MiB/s [2024-11-20T14:17:34.874Z] [2024-11-20 15:17:34.693782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.966 
[2024-11-20 15:17:34.693802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.966 [2024-11-20 15:17:34.708093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.966 [2024-11-20 15:17:34.708113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.966 [2024-11-20 15:17:34.722386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.966 [2024-11-20 15:17:34.722405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.966 [2024-11-20 15:17:34.732965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.966 [2024-11-20 15:17:34.732985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.966 [2024-11-20 15:17:34.747446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.966 [2024-11-20 15:17:34.747466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.966 [2024-11-20 15:17:34.761486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.966 [2024-11-20 15:17:34.761506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.966 [2024-11-20 15:17:34.775761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.966 [2024-11-20 15:17:34.775782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.966 [2024-11-20 15:17:34.789646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.966 [2024-11-20 15:17:34.789666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.966 [2024-11-20 15:17:34.803675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.966 [2024-11-20 15:17:34.803694] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.966 [2024-11-20 15:17:34.817810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.966 [2024-11-20 15:17:34.817829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.966 [2024-11-20 15:17:34.831669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.966 [2024-11-20 15:17:34.831689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.966 [2024-11-20 15:17:34.845945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.966 [2024-11-20 15:17:34.845977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.966 [2024-11-20 15:17:34.861650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.966 [2024-11-20 15:17:34.861669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.225 [2024-11-20 15:17:34.876066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.225 [2024-11-20 15:17:34.876086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.225 [2024-11-20 15:17:34.890383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.225 [2024-11-20 15:17:34.890403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.225 [2024-11-20 15:17:34.904755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.225 [2024-11-20 15:17:34.904775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.225 [2024-11-20 15:17:34.918962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.225 [2024-11-20 15:17:34.918982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:31.225 [2024-11-20 15:17:34.933181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.225 [2024-11-20 15:17:34.933200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.225 [2024-11-20 15:17:34.944426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.225 [2024-11-20 15:17:34.944445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.225 [2024-11-20 15:17:34.953901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.225 [2024-11-20 15:17:34.953920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.225 [2024-11-20 15:17:34.969081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.225 [2024-11-20 15:17:34.969101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.225 [2024-11-20 15:17:34.984559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.225 [2024-11-20 15:17:34.984579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.225 [2024-11-20 15:17:34.998610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.225 [2024-11-20 15:17:34.998629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.225 [2024-11-20 15:17:35.009225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.225 [2024-11-20 15:17:35.009244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.225 [2024-11-20 15:17:35.023766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.225 [2024-11-20 15:17:35.023787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.225 [2024-11-20 15:17:35.037711] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.225 [2024-11-20 15:17:35.037732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.225 [2024-11-20 15:17:35.051940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.225 [2024-11-20 15:17:35.051967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.226 [2024-11-20 15:17:35.066425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.226 [2024-11-20 15:17:35.066446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.226 [2024-11-20 15:17:35.077415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.226 [2024-11-20 15:17:35.077435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.226 [2024-11-20 15:17:35.092202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.226 [2024-11-20 15:17:35.092222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.226 [2024-11-20 15:17:35.103249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.226 [2024-11-20 15:17:35.103269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.226 [2024-11-20 15:17:35.118146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.226 [2024-11-20 15:17:35.118166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.226 [2024-11-20 15:17:35.129119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.226 [2024-11-20 15:17:35.129139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.485 [2024-11-20 15:17:35.138799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:31.485 [2024-11-20 15:17:35.138820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.485 [2024-11-20 15:17:35.153295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.485 [2024-11-20 15:17:35.153314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.485 [2024-11-20 15:17:35.166707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.485 [2024-11-20 15:17:35.166737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.485 [2024-11-20 15:17:35.180954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.485 [2024-11-20 15:17:35.180975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.485 [2024-11-20 15:17:35.195137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.485 [2024-11-20 15:17:35.195157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.485 [2024-11-20 15:17:35.205917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.485 [2024-11-20 15:17:35.205938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.485 [2024-11-20 15:17:35.220182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.485 [2024-11-20 15:17:35.220202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.485 [2024-11-20 15:17:35.233552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.485 [2024-11-20 15:17:35.233572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.485 [2024-11-20 15:17:35.247838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.485 
[2024-11-20 15:17:35.247860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.485 [2024-11-20 15:17:35.259059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.485 [2024-11-20 15:17:35.259080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.485 [2024-11-20 15:17:35.273502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.485 [2024-11-20 15:17:35.273523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.485 [2024-11-20 15:17:35.287465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.485 [2024-11-20 15:17:35.287485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.485 [2024-11-20 15:17:35.298789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.485 [2024-11-20 15:17:35.298809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.485 [2024-11-20 15:17:35.313428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.485 [2024-11-20 15:17:35.313449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.485 [2024-11-20 15:17:35.327554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.485 [2024-11-20 15:17:35.327575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.485 [2024-11-20 15:17:35.336760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.485 [2024-11-20 15:17:35.336779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.485 [2024-11-20 15:17:35.351261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.485 [2024-11-20 15:17:35.351282] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.485 [2024-11-20 15:17:35.364969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.485 [2024-11-20 15:17:35.364989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.485 [2024-11-20 15:17:35.379211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.485 [2024-11-20 15:17:35.379232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.744 [2024-11-20 15:17:35.393077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.744 [2024-11-20 15:17:35.393097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.744 [2024-11-20 15:17:35.407375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.744 [2024-11-20 15:17:35.407396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.744 [2024-11-20 15:17:35.421403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.744 [2024-11-20 15:17:35.421424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.744 [2024-11-20 15:17:35.436149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.744 [2024-11-20 15:17:35.436168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.744 [2024-11-20 15:17:35.451611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.744 [2024-11-20 15:17:35.451632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.744 [2024-11-20 15:17:35.465552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.744 [2024-11-20 15:17:35.465571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:31.744 [2024-11-20 15:17:35.479524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.744 [2024-11-20 15:17:35.479543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.744 [2024-11-20 15:17:35.493585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.744 [2024-11-20 15:17:35.493605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.744 [2024-11-20 15:17:35.507681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.744 [2024-11-20 15:17:35.507702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.744 [2024-11-20 15:17:35.522081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.744 [2024-11-20 15:17:35.522101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.744 [2024-11-20 15:17:35.536034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.744 [2024-11-20 15:17:35.536053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.744 [2024-11-20 15:17:35.549723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.744 [2024-11-20 15:17:35.549743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.744 [2024-11-20 15:17:35.564089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.744 [2024-11-20 15:17:35.564109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.744 [2024-11-20 15:17:35.575431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.744 [2024-11-20 15:17:35.575452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.744 [2024-11-20 15:17:35.590117] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.744 [2024-11-20 15:17:35.590138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc.c:1517:nvmf_rpc_ns_paused) repeats at roughly 10-15 ms intervals from 15:17:35.590 through 15:17:37.697, interleaved with periodic fio bandwidth samples: 16478.50 IOPS, 128.74 MiB/s [2024-11-20T14:17:35.932Z]; 16468.33 IOPS, 128.66 MiB/s [2024-11-20T14:17:36.786Z]; 16488.00 IOPS, 128.81 MiB/s [2024-11-20T14:17:37.825Z] ...]
00:08:33.917 [2024-11-20 15:17:37.712156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use 00:08:33.917 [2024-11-20 15:17:37.712175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.917 [2024-11-20 15:17:37.722966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.917 [2024-11-20 15:17:37.722985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.917 [2024-11-20 15:17:37.737802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.917 [2024-11-20 15:17:37.737820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.917 [2024-11-20 15:17:37.749037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.917 [2024-11-20 15:17:37.749056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.917 [2024-11-20 15:17:37.763973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.917 [2024-11-20 15:17:37.763993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.917 [2024-11-20 15:17:37.775539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.917 [2024-11-20 15:17:37.775560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.917 [2024-11-20 15:17:37.790120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.917 [2024-11-20 15:17:37.790140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.917 [2024-11-20 15:17:37.804454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.917 [2024-11-20 15:17:37.804473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.917 [2024-11-20 15:17:37.814956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.917 
[2024-11-20 15:17:37.814976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.177 [2024-11-20 15:17:37.829557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.177 [2024-11-20 15:17:37.829578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.177 [2024-11-20 15:17:37.843682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.177 [2024-11-20 15:17:37.843703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.177 [2024-11-20 15:17:37.853504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.177 [2024-11-20 15:17:37.853524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.177 [2024-11-20 15:17:37.863308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.177 [2024-11-20 15:17:37.863382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.177 [2024-11-20 15:17:37.878294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.177 [2024-11-20 15:17:37.878314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.177 [2024-11-20 15:17:37.889603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.177 [2024-11-20 15:17:37.889623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.177 [2024-11-20 15:17:37.903891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.177 [2024-11-20 15:17:37.903910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.177 [2024-11-20 15:17:37.917636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.177 [2024-11-20 15:17:37.917660] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.177 [2024-11-20 15:17:37.931998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.177 [2024-11-20 15:17:37.932018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.177 [2024-11-20 15:17:37.946404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.177 [2024-11-20 15:17:37.946423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.177 [2024-11-20 15:17:37.957837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.177 [2024-11-20 15:17:37.957857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.177 [2024-11-20 15:17:37.972343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.177 [2024-11-20 15:17:37.972363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.177 [2024-11-20 15:17:37.986044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.177 [2024-11-20 15:17:37.986063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.177 [2024-11-20 15:17:38.000506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.177 [2024-11-20 15:17:38.000527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.177 [2024-11-20 15:17:38.014514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.177 [2024-11-20 15:17:38.014535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.177 [2024-11-20 15:17:38.028721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.177 [2024-11-20 15:17:38.028741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:34.177 [2024-11-20 15:17:38.042709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.177 [2024-11-20 15:17:38.042729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.177 [2024-11-20 15:17:38.057209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.177 [2024-11-20 15:17:38.057229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.177 [2024-11-20 15:17:38.071107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.177 [2024-11-20 15:17:38.071126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.437 [2024-11-20 15:17:38.085326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.437 [2024-11-20 15:17:38.085346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.437 [2024-11-20 15:17:38.099067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.437 [2024-11-20 15:17:38.099087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.437 [2024-11-20 15:17:38.112840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.437 [2024-11-20 15:17:38.112860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.437 [2024-11-20 15:17:38.127466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.437 [2024-11-20 15:17:38.127486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.437 [2024-11-20 15:17:38.138546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.437 [2024-11-20 15:17:38.138564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.437 [2024-11-20 15:17:38.153037] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.437 [2024-11-20 15:17:38.153057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.437 [2024-11-20 15:17:38.166551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.437 [2024-11-20 15:17:38.166571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.437 [2024-11-20 15:17:38.181189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.437 [2024-11-20 15:17:38.181213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.437 [2024-11-20 15:17:38.195696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.437 [2024-11-20 15:17:38.195714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.437 [2024-11-20 15:17:38.206450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.437 [2024-11-20 15:17:38.206469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.437 [2024-11-20 15:17:38.215990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.437 [2024-11-20 15:17:38.216008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.437 [2024-11-20 15:17:38.226201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.437 [2024-11-20 15:17:38.226219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.437 [2024-11-20 15:17:38.241002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.437 [2024-11-20 15:17:38.241022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.437 [2024-11-20 15:17:38.255583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:34.437 [2024-11-20 15:17:38.255602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.437 [2024-11-20 15:17:38.266177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.437 [2024-11-20 15:17:38.266196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.437 [2024-11-20 15:17:38.280901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.437 [2024-11-20 15:17:38.280921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.437 [2024-11-20 15:17:38.294558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.437 [2024-11-20 15:17:38.294577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.437 [2024-11-20 15:17:38.309405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.437 [2024-11-20 15:17:38.309425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.437 [2024-11-20 15:17:38.320708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.437 [2024-11-20 15:17:38.320729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.437 [2024-11-20 15:17:38.335212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.437 [2024-11-20 15:17:38.335232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.696 [2024-11-20 15:17:38.349329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.696 [2024-11-20 15:17:38.349350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.696 [2024-11-20 15:17:38.360428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.696 
[2024-11-20 15:17:38.360448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.696 [2024-11-20 15:17:38.374783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.696 [2024-11-20 15:17:38.374803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.696 [2024-11-20 15:17:38.388650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.696 [2024-11-20 15:17:38.388670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.696 [2024-11-20 15:17:38.402819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.696 [2024-11-20 15:17:38.402839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.696 [2024-11-20 15:17:38.416860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.696 [2024-11-20 15:17:38.416879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.696 [2024-11-20 15:17:38.431558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.696 [2024-11-20 15:17:38.431577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.696 [2024-11-20 15:17:38.447211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.696 [2024-11-20 15:17:38.447230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.696 [2024-11-20 15:17:38.461265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.696 [2024-11-20 15:17:38.461285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.696 [2024-11-20 15:17:38.475592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.696 [2024-11-20 15:17:38.475612] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.696 [2024-11-20 15:17:38.489995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.697 [2024-11-20 15:17:38.490014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.697 [2024-11-20 15:17:38.504231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.697 [2024-11-20 15:17:38.504250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.697 [2024-11-20 15:17:38.514940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.697 [2024-11-20 15:17:38.514965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.697 [2024-11-20 15:17:38.524590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.697 [2024-11-20 15:17:38.524610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.697 [2024-11-20 15:17:38.539186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.697 [2024-11-20 15:17:38.539216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.697 [2024-11-20 15:17:38.553243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.697 [2024-11-20 15:17:38.553263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.697 [2024-11-20 15:17:38.567426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.697 [2024-11-20 15:17:38.567445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.697 [2024-11-20 15:17:38.578214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.697 [2024-11-20 15:17:38.578234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:34.697 [2024-11-20 15:17:38.592658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.697 [2024-11-20 15:17:38.592677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 [2024-11-20 15:17:38.605914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.605933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 [2024-11-20 15:17:38.620261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.620281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 [2024-11-20 15:17:38.634026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.634046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 [2024-11-20 15:17:38.648492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.648512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 [2024-11-20 15:17:38.662868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.662888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 [2024-11-20 15:17:38.677321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.677341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 [2024-11-20 15:17:38.687850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.687870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 16479.80 IOPS, 128.75 MiB/s 
[2024-11-20T14:17:38.864Z] [2024-11-20 15:17:38.701819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.701839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 00:08:34.956 Latency(us) 00:08:34.956 [2024-11-20T14:17:38.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.956 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:34.956 Nvme1n1 : 5.01 16480.24 128.75 0.00 0.00 7759.06 3561.74 18008.15 00:08:34.956 [2024-11-20T14:17:38.864Z] =================================================================================================================== 00:08:34.956 [2024-11-20T14:17:38.864Z] Total : 16480.24 128.75 0.00 0.00 7759.06 3561.74 18008.15 00:08:34.956 [2024-11-20 15:17:38.710488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.710506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 [2024-11-20 15:17:38.722520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.722536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 [2024-11-20 15:17:38.734568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.734585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 [2024-11-20 15:17:38.746590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.746609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 [2024-11-20 15:17:38.758618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.758634] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 [2024-11-20 15:17:38.770645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.770659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 [2024-11-20 15:17:38.782678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.782693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 [2024-11-20 15:17:38.794709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.794723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 [2024-11-20 15:17:38.806741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.806755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 [2024-11-20 15:17:38.818769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.818779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 [2024-11-20 15:17:38.830803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.830815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 [2024-11-20 15:17:38.842836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.842847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.956 [2024-11-20 15:17:38.854865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.956 [2024-11-20 15:17:38.854874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:34.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2038618) - No such process 00:08:35.216 15:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2038618 00:08:35.216 15:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.216 15:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.216 15:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:35.216 15:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.216 15:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:35.216 15:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.216 15:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:35.216 delay0 00:08:35.216 15:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.216 15:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:35.216 15:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.216 15:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:35.216 15:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.216 15:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:35.216 
[2024-11-20 15:17:39.006581] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:41.782 Initializing NVMe Controllers 00:08:41.782 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:41.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:41.782 Initialization complete. Launching workers. 00:08:41.782 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 815 00:08:41.782 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1099, failed to submit 36 00:08:41.782 success 903, unsuccessful 196, failed 0 00:08:41.782 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:41.782 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:41.782 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:41.782 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:41.782 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:41.782 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:41.782 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:41.783 rmmod nvme_tcp 00:08:41.783 rmmod nvme_fabrics 00:08:41.783 rmmod nvme_keyring 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:41.783 15:17:45 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2036760 ']' 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2036760 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2036760 ']' 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2036760 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2036760 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2036760' 00:08:41.783 killing process with pid 2036760 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2036760 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2036760 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.783 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:44.323 00:08:44.323 real 0m31.484s 00:08:44.323 user 0m41.994s 00:08:44.323 sys 0m11.215s 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:44.323 ************************************ 00:08:44.323 END TEST nvmf_zcopy 00:08:44.323 ************************************ 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:44.323 ************************************ 00:08:44.323 START TEST nvmf_nmic 00:08:44.323 ************************************ 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:44.323 * Looking for test storage... 00:08:44.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.323 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:44.324 15:17:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:44.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.324 --rc 
genhtml_branch_coverage=1 00:08:44.324 --rc genhtml_function_coverage=1 00:08:44.324 --rc genhtml_legend=1 00:08:44.324 --rc geninfo_all_blocks=1 00:08:44.324 --rc geninfo_unexecuted_blocks=1 00:08:44.324 00:08:44.324 ' 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:44.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.324 --rc genhtml_branch_coverage=1 00:08:44.324 --rc genhtml_function_coverage=1 00:08:44.324 --rc genhtml_legend=1 00:08:44.324 --rc geninfo_all_blocks=1 00:08:44.324 --rc geninfo_unexecuted_blocks=1 00:08:44.324 00:08:44.324 ' 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:44.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.324 --rc genhtml_branch_coverage=1 00:08:44.324 --rc genhtml_function_coverage=1 00:08:44.324 --rc genhtml_legend=1 00:08:44.324 --rc geninfo_all_blocks=1 00:08:44.324 --rc geninfo_unexecuted_blocks=1 00:08:44.324 00:08:44.324 ' 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:44.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.324 --rc genhtml_branch_coverage=1 00:08:44.324 --rc genhtml_function_coverage=1 00:08:44.324 --rc genhtml_legend=1 00:08:44.324 --rc geninfo_all_blocks=1 00:08:44.324 --rc geninfo_unexecuted_blocks=1 00:08:44.324 00:08:44.324 ' 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.324 15:17:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.324 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:44.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:44.325 
15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:44.325 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.924 15:17:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:50.924 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:50.924 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:50.924 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:50.925 Found net devices under 0000:86:00.0: cvl_0_0 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:50.925 Found net devices under 0000:86:00.1: cvl_0_1 00:08:50.925 
15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:50.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:08:50.925 00:08:50.925 --- 10.0.0.2 ping statistics --- 00:08:50.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.925 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:50.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:50.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:08:50.925 00:08:50.925 --- 10.0.0.1 ping statistics --- 00:08:50.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.925 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2044219 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2044219 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2044219 ']' 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.925 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.925 [2024-11-20 15:17:54.012801] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:08:50.925 [2024-11-20 15:17:54.012855] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.925 [2024-11-20 15:17:54.094280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.925 [2024-11-20 15:17:54.138430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.925 [2024-11-20 15:17:54.138469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:50.925 [2024-11-20 15:17:54.138476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.925 [2024-11-20 15:17:54.138482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.925 [2024-11-20 15:17:54.138488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.925 [2024-11-20 15:17:54.139932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.925 [2024-11-20 15:17:54.140044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.925 [2024-11-20 15:17:54.140059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.925 [2024-11-20 15:17:54.140062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.925 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.925 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:50.925 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:50.925 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:50.925 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.925 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.925 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:50.925 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.925 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.925 [2024-11-20 15:17:54.289793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.925 
15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.925 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:50.925 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.925 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.925 Malloc0 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.926 [2024-11-20 15:17:54.353461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:50.926 test case1: single bdev can't be used in multiple subsystems 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.926 [2024-11-20 15:17:54.389376] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:50.926 [2024-11-20 
15:17:54.389395] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:50.926 [2024-11-20 15:17:54.389402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.926 request: 00:08:50.926 { 00:08:50.926 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:50.926 "namespace": { 00:08:50.926 "bdev_name": "Malloc0", 00:08:50.926 "no_auto_visible": false 00:08:50.926 }, 00:08:50.926 "method": "nvmf_subsystem_add_ns", 00:08:50.926 "req_id": 1 00:08:50.926 } 00:08:50.926 Got JSON-RPC error response 00:08:50.926 response: 00:08:50.926 { 00:08:50.926 "code": -32602, 00:08:50.926 "message": "Invalid parameters" 00:08:50.926 } 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:50.926 Adding namespace failed - expected result. 
00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:50.926 test case2: host connect to nvmf target in multiple paths 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.926 [2024-11-20 15:17:54.401514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.926 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:51.861 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:52.795 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:52.795 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:52.795 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:52.795 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:52.795 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:08:55.327 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:55.327 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:55.327 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:55.327 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:55.327 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:55.327 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:55.327 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:55.327 [global] 00:08:55.327 thread=1 00:08:55.327 invalidate=1 00:08:55.327 rw=write 00:08:55.327 time_based=1 00:08:55.327 runtime=1 00:08:55.327 ioengine=libaio 00:08:55.327 direct=1 00:08:55.327 bs=4096 00:08:55.327 iodepth=1 00:08:55.327 norandommap=0 00:08:55.327 numjobs=1 00:08:55.327 00:08:55.327 verify_dump=1 00:08:55.327 verify_backlog=512 00:08:55.327 verify_state_save=0 00:08:55.327 do_verify=1 00:08:55.327 verify=crc32c-intel 00:08:55.327 [job0] 00:08:55.327 filename=/dev/nvme0n1 00:08:55.327 Could not set queue depth (nvme0n1) 00:08:55.327 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:55.327 fio-3.35 00:08:55.327 Starting 1 thread 00:08:56.262 00:08:56.262 job0: (groupid=0, jobs=1): err= 0: pid=2045253: Wed Nov 20 15:18:00 2024 00:08:56.262 read: IOPS=22, BW=89.5KiB/s (91.6kB/s)(92.0KiB/1028msec) 00:08:56.262 slat (nsec): min=9571, max=42460, avg=22140.61, stdev=5120.22 00:08:56.262 clat (usec): min=40828, max=42028, avg=41004.40, stdev=227.94 00:08:56.262 lat (usec): min=40838, max=42070, 
avg=41026.54, stdev=232.59 00:08:56.262 clat percentiles (usec): 00:08:56.262 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:08:56.262 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:56.262 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:56.262 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:56.262 | 99.99th=[42206] 00:08:56.262 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:08:56.263 slat (nsec): min=9925, max=40583, avg=10991.63, stdev=1698.29 00:08:56.263 clat (usec): min=116, max=326, avg=151.29, stdev=20.12 00:08:56.263 lat (usec): min=126, max=367, avg=162.28, stdev=20.81 00:08:56.263 clat percentiles (usec): 00:08:56.263 | 1.00th=[ 123], 5.00th=[ 125], 10.00th=[ 126], 20.00th=[ 129], 00:08:56.263 | 30.00th=[ 135], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:08:56.263 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 172], 95.00th=[ 176], 00:08:56.263 | 99.00th=[ 182], 99.50th=[ 239], 99.90th=[ 326], 99.95th=[ 326], 00:08:56.263 | 99.99th=[ 326] 00:08:56.263 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:08:56.263 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:56.263 lat (usec) : 250=95.51%, 500=0.19% 00:08:56.263 lat (msec) : 50=4.30% 00:08:56.263 cpu : usr=0.39%, sys=0.88%, ctx=535, majf=0, minf=1 00:08:56.263 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:56.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.263 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.263 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:56.263 00:08:56.263 Run status group 0 (all jobs): 00:08:56.263 READ: bw=89.5KiB/s (91.6kB/s), 89.5KiB/s-89.5KiB/s (91.6kB/s-91.6kB/s), io=92.0KiB (94.2kB), 
run=1028-1028msec 00:08:56.263 WRITE: bw=1992KiB/s (2040kB/s), 1992KiB/s-1992KiB/s (2040kB/s-2040kB/s), io=2048KiB (2097kB), run=1028-1028msec 00:08:56.263 00:08:56.263 Disk stats (read/write): 00:08:56.263 nvme0n1: ios=69/512, merge=0/0, ticks=793/76, in_queue=869, util=91.28% 00:08:56.263 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:56.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:56.521 rmmod nvme_tcp 00:08:56.521 rmmod nvme_fabrics 00:08:56.521 rmmod nvme_keyring 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2044219 ']' 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2044219 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2044219 ']' 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2044219 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.521 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2044219 00:08:56.781 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.781 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.781 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2044219' 00:08:56.781 killing process with pid 2044219 00:08:56.781 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2044219 00:08:56.781 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2044219 00:08:56.781 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:56.781 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:56.781 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:56.781 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:56.781 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:56.781 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:56.781 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:56.781 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:56.781 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:56.781 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.781 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.781 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:59.349 00:08:59.349 real 0m14.944s 00:08:59.349 user 0m32.947s 00:08:59.349 sys 0m5.308s 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.349 ************************************ 00:08:59.349 END TEST nvmf_nmic 00:08:59.349 ************************************ 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:59.349 ************************************ 00:08:59.349 START TEST nvmf_fio_target 00:08:59.349 ************************************ 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:59.349 * Looking for test storage... 00:08:59.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@337 -- # read -ra ver2 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.349 15:18:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.349 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:59.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.350 --rc genhtml_branch_coverage=1 00:08:59.350 --rc genhtml_function_coverage=1 00:08:59.350 --rc genhtml_legend=1 00:08:59.350 --rc geninfo_all_blocks=1 00:08:59.350 --rc geninfo_unexecuted_blocks=1 00:08:59.350 00:08:59.350 ' 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:59.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.350 --rc genhtml_branch_coverage=1 00:08:59.350 --rc genhtml_function_coverage=1 00:08:59.350 --rc genhtml_legend=1 00:08:59.350 --rc geninfo_all_blocks=1 00:08:59.350 --rc geninfo_unexecuted_blocks=1 00:08:59.350 00:08:59.350 ' 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:59.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.350 --rc genhtml_branch_coverage=1 00:08:59.350 --rc genhtml_function_coverage=1 00:08:59.350 --rc genhtml_legend=1 00:08:59.350 --rc geninfo_all_blocks=1 00:08:59.350 --rc geninfo_unexecuted_blocks=1 00:08:59.350 00:08:59.350 ' 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:59.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.350 --rc 
genhtml_branch_coverage=1 00:08:59.350 --rc genhtml_function_coverage=1 00:08:59.350 --rc genhtml_legend=1 00:08:59.350 --rc geninfo_all_blocks=1 00:08:59.350 --rc geninfo_unexecuted_blocks=1 00:08:59.350 00:08:59.350 ' 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:59.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:59.350 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:59.350 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.350 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:59.350 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:59.350 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.350 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:59.350 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:59.350 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:59.350 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.350 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.350 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.350 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:59.350 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:59.350 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:59.350 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:05.927 15:18:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:05.927 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:05.928 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:05.928 15:18:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:05.928 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:05.928 Found net devices under 0000:86:00.0: cvl_0_0 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:05.928 Found net devices under 0000:86:00.1: cvl_0_1 
00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:05.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:05.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:09:05.928 00:09:05.928 --- 10.0.0.2 ping statistics --- 00:09:05.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.928 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:09:05.928 00:09:05.928 --- 10.0.0.1 ping statistics --- 00:09:05.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.928 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
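The trace above (nvmf/common.sh@250-291) builds the TCP test bed: the target-side NIC is moved into a network namespace, both ends get addresses on 10.0.0.0/24, port 4420 is opened, and connectivity is verified with ping. A minimal dry-run sketch of that sequence, assuming the interface names cvl_0_0/cvl_0_1 from this log (actually executing it needs root and real devices, so DRY_RUN=1 only prints each command):

```shell
#!/bin/sh
# Dry-run sketch of the netns setup traced in nvmf/common.sh (nvmf_tcp_init).
# Assumptions: interfaces cvl_0_0 (target side) and cvl_0_1 (initiator side) exist.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                                   # target runs isolated in its own netns
run ip link set cvl_0_0 netns "$NS"                      # move target NIC into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP (host side)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP (namespace side)
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP port
run ping -c 1 10.0.0.2                                   # host -> namespace reachability
run ip netns exec "$NS" ping -c 1 10.0.0.1               # namespace -> host reachability
```

With DRY_RUN=0 (as root) this reproduces the setup the log records; nvmf_tgt is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why the target listens on 10.0.0.2.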
00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2049365 00:09:05.928 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2049365 00:09:05.928 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:05.928 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2049365 ']' 00:09:05.928 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.928 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.928 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.928 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.928 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:05.928 [2024-11-20 15:18:09.054955] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:09:05.928 [2024-11-20 15:18:09.055008] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.928 [2024-11-20 15:18:09.135718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:05.928 [2024-11-20 15:18:09.178677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.928 [2024-11-20 15:18:09.178713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.928 [2024-11-20 15:18:09.178721] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.928 [2024-11-20 15:18:09.178730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.928 [2024-11-20 15:18:09.178735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:05.928 [2024-11-20 15:18:09.180184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.929 [2024-11-20 15:18:09.180295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.929 [2024-11-20 15:18:09.180403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.929 [2024-11-20 15:18:09.180404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:05.929 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.929 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:05.929 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:05.929 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:05.929 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:05.929 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.929 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:05.929 [2024-11-20 15:18:09.490620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.929 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:05.929 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:05.929 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:06.188 15:18:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:06.188 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:06.447 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:06.447 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:06.706 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:06.706 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:06.706 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:06.965 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:06.965 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:07.224 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:07.224 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:07.483 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:07.483 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:07.742 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:07.742 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:07.742 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:08.001 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:08.001 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:08.260 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.520 [2024-11-20 15:18:12.181815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.520 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:08.520 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:08.779 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
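The target/fio.sh trace above provisions the target over JSON-RPC: create the TCP transport, create malloc bdevs (plus raid0/concat arrays from them), create subsystem cnode1, attach the bdevs as namespaces, and add a listener on 10.0.0.2:4420 before the initiator connects. A condensed dry-run sketch of that RPC sequence, assuming $SPDK_ROOT points at an SPDK checkout (DRY_RUN=1 only prints each call):

```shell
#!/bin/sh
# Dry-run sketch of the rpc.py provisioning flow traced in target/fio.sh.
# Assumption: $SPDK_ROOT is an SPDK checkout with a running nvmf_tgt when DRY_RUN=0.
DRY_RUN=${DRY_RUN:-1}
rpc() { if [ "$DRY_RUN" = 1 ]; then echo "rpc.py $*"; else "$SPDK_ROOT/scripts/rpc.py" "$@"; fi; }

NQN=nqn.2016-06.io.spdk:cnode1
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512                            # -> Malloc0 (and again for Malloc1..Malloc6)
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns "$NQN" Malloc0
rpc nvmf_subsystem_add_ns "$NQN" raid0
rpc nvmf_subsystem_add_ns "$NQN" concat0
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

The initiator then does `nvme connect -t tcp -n $NQN -a 10.0.0.2 -s 4420`, which surfaces the namespaces as /dev/nvme0n1..n4, the devices the fio jobs below run against.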
00:09:10.157 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:10.157 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:10.157 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:10.157 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:10.157 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:10.157 15:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:12.063 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:12.063 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:12.063 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:12.063 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:12.063 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:12.063 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:12.063 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:12.063 [global] 00:09:12.063 thread=1 00:09:12.063 invalidate=1 00:09:12.063 rw=write 00:09:12.063 time_based=1 00:09:12.063 runtime=1 00:09:12.063 ioengine=libaio 00:09:12.063 direct=1 00:09:12.063 bs=4096 00:09:12.063 iodepth=1 00:09:12.063 norandommap=0 00:09:12.063 numjobs=1 00:09:12.063 00:09:12.063 
verify_dump=1 00:09:12.063 verify_backlog=512 00:09:12.063 verify_state_save=0 00:09:12.063 do_verify=1 00:09:12.063 verify=crc32c-intel 00:09:12.063 [job0] 00:09:12.063 filename=/dev/nvme0n1 00:09:12.063 [job1] 00:09:12.063 filename=/dev/nvme0n2 00:09:12.063 [job2] 00:09:12.063 filename=/dev/nvme0n3 00:09:12.063 [job3] 00:09:12.063 filename=/dev/nvme0n4 00:09:12.063 Could not set queue depth (nvme0n1) 00:09:12.063 Could not set queue depth (nvme0n2) 00:09:12.063 Could not set queue depth (nvme0n3) 00:09:12.063 Could not set queue depth (nvme0n4) 00:09:12.321 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.321 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.321 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.321 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.321 fio-3.35 00:09:12.321 Starting 4 threads 00:09:13.698 00:09:13.698 job0: (groupid=0, jobs=1): err= 0: pid=2050919: Wed Nov 20 15:18:17 2024 00:09:13.698 read: IOPS=509, BW=2037KiB/s (2086kB/s)(2080KiB/1021msec) 00:09:13.698 slat (nsec): min=6472, max=24649, avg=8951.96, stdev=2323.52 00:09:13.698 clat (usec): min=185, max=41982, avg=1559.89, stdev=7046.01 00:09:13.698 lat (usec): min=192, max=42000, avg=1568.85, stdev=7046.79 00:09:13.698 clat percentiles (usec): 00:09:13.698 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 235], 00:09:13.698 | 30.00th=[ 265], 40.00th=[ 289], 50.00th=[ 314], 60.00th=[ 334], 00:09:13.698 | 70.00th=[ 355], 80.00th=[ 371], 90.00th=[ 396], 95.00th=[ 441], 00:09:13.698 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:13.698 | 99.99th=[42206] 00:09:13.698 write: IOPS=1002, BW=4012KiB/s (4108kB/s)(4096KiB/1021msec); 0 zone resets 00:09:13.698 slat (nsec): min=5300, max=38733, avg=11049.47, 
stdev=2080.61 00:09:13.698 clat (usec): min=131, max=860, avg=185.93, stdev=48.31 00:09:13.698 lat (usec): min=141, max=871, avg=196.97, stdev=48.38 00:09:13.699 clat percentiles (usec): 00:09:13.699 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:09:13.699 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 174], 60.00th=[ 184], 00:09:13.699 | 70.00th=[ 194], 80.00th=[ 221], 90.00th=[ 239], 95.00th=[ 260], 00:09:13.699 | 99.00th=[ 293], 99.50th=[ 404], 99.90th=[ 693], 99.95th=[ 857], 00:09:13.699 | 99.99th=[ 857] 00:09:13.699 bw ( KiB/s): min= 4096, max= 4096, per=15.71%, avg=4096.00, stdev= 0.00, samples=2 00:09:13.699 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:09:13.699 lat (usec) : 250=69.30%, 500=29.27%, 750=0.32%, 1000=0.06% 00:09:13.699 lat (msec) : 50=1.04% 00:09:13.699 cpu : usr=0.88%, sys=1.47%, ctx=1544, majf=0, minf=1 00:09:13.699 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.699 issued rwts: total=520,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.699 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.699 job1: (groupid=0, jobs=1): err= 0: pid=2050920: Wed Nov 20 15:18:17 2024 00:09:13.699 read: IOPS=1510, BW=6041KiB/s (6186kB/s)(6168KiB/1021msec) 00:09:13.699 slat (nsec): min=6575, max=26218, avg=8688.78, stdev=1997.48 00:09:13.699 clat (usec): min=189, max=41980, avg=425.88, stdev=2542.55 00:09:13.699 lat (usec): min=197, max=42004, avg=434.57, stdev=2543.37 00:09:13.699 clat percentiles (usec): 00:09:13.699 | 1.00th=[ 206], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 239], 00:09:13.699 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 253], 00:09:13.699 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 343], 95.00th=[ 367], 00:09:13.699 | 99.00th=[ 506], 99.50th=[ 529], 99.90th=[41157], 
99.95th=[42206], 00:09:13.699 | 99.99th=[42206] 00:09:13.699 write: IOPS=2005, BW=8024KiB/s (8216kB/s)(8192KiB/1021msec); 0 zone resets 00:09:13.699 slat (usec): min=9, max=837, avg=11.52, stdev=18.36 00:09:13.699 clat (usec): min=113, max=316, avg=155.65, stdev=29.83 00:09:13.699 lat (usec): min=123, max=1020, avg=167.16, stdev=35.79 00:09:13.699 clat percentiles (usec): 00:09:13.699 | 1.00th=[ 123], 5.00th=[ 128], 10.00th=[ 133], 20.00th=[ 137], 00:09:13.699 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 153], 00:09:13.699 | 70.00th=[ 157], 80.00th=[ 165], 90.00th=[ 190], 95.00th=[ 237], 00:09:13.699 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 302], 99.95th=[ 302], 00:09:13.699 | 99.99th=[ 318] 00:09:13.699 bw ( KiB/s): min= 5392, max=10992, per=31.42%, avg=8192.00, stdev=3959.80, samples=2 00:09:13.699 iops : min= 1348, max= 2748, avg=2048.00, stdev=989.95, samples=2 00:09:13.699 lat (usec) : 250=78.36%, 500=21.14%, 750=0.33% 00:09:13.699 lat (msec) : 50=0.17% 00:09:13.699 cpu : usr=2.25%, sys=3.14%, ctx=3595, majf=0, minf=1 00:09:13.699 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.699 issued rwts: total=1542,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.699 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.699 job2: (groupid=0, jobs=1): err= 0: pid=2050921: Wed Nov 20 15:18:17 2024 00:09:13.699 read: IOPS=1269, BW=5079KiB/s (5201kB/s)(5084KiB/1001msec) 00:09:13.699 slat (nsec): min=6668, max=27563, avg=8169.38, stdev=2011.62 00:09:13.699 clat (usec): min=176, max=41543, avg=543.74, stdev=3416.08 00:09:13.699 lat (usec): min=184, max=41552, avg=551.91, stdev=3416.39 00:09:13.699 clat percentiles (usec): 00:09:13.699 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 217], 00:09:13.699 | 30.00th=[ 221], 40.00th=[ 227], 
50.00th=[ 233], 60.00th=[ 243], 00:09:13.699 | 70.00th=[ 265], 80.00th=[ 293], 90.00th=[ 338], 95.00th=[ 396], 00:09:13.699 | 99.00th=[ 502], 99.50th=[40633], 99.90th=[41681], 99.95th=[41681], 00:09:13.699 | 99.99th=[41681] 00:09:13.699 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:13.699 slat (nsec): min=9442, max=38972, avg=11100.14, stdev=2008.01 00:09:13.699 clat (usec): min=122, max=972, avg=179.19, stdev=52.42 00:09:13.699 lat (usec): min=132, max=984, avg=190.29, stdev=53.08 00:09:13.699 clat percentiles (usec): 00:09:13.699 | 1.00th=[ 128], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 147], 00:09:13.699 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 172], 00:09:13.699 | 70.00th=[ 182], 80.00th=[ 198], 90.00th=[ 241], 95.00th=[ 269], 00:09:13.699 | 99.00th=[ 367], 99.50th=[ 396], 99.90th=[ 635], 99.95th=[ 971], 00:09:13.699 | 99.99th=[ 971] 00:09:13.699 bw ( KiB/s): min= 4096, max= 4096, per=15.71%, avg=4096.00, stdev= 0.00, samples=1 00:09:13.699 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:13.699 lat (usec) : 250=79.80%, 500=19.56%, 750=0.25%, 1000=0.07% 00:09:13.699 lat (msec) : 50=0.32% 00:09:13.699 cpu : usr=0.80%, sys=3.40%, ctx=2807, majf=0, minf=1 00:09:13.699 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.699 issued rwts: total=1271,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.699 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.699 job3: (groupid=0, jobs=1): err= 0: pid=2050922: Wed Nov 20 15:18:17 2024 00:09:13.699 read: IOPS=1991, BW=7964KiB/s (8155kB/s)(7972KiB/1001msec) 00:09:13.699 slat (nsec): min=6763, max=25649, avg=7744.57, stdev=1083.18 00:09:13.699 clat (usec): min=188, max=41201, avg=304.50, stdev=1293.30 00:09:13.699 lat (usec): min=196, 
max=41209, avg=312.25, stdev=1293.33 00:09:13.699 clat percentiles (usec): 00:09:13.699 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 225], 00:09:13.699 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 251], 60.00th=[ 269], 00:09:13.699 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 343], 95.00th=[ 367], 00:09:13.699 | 99.00th=[ 400], 99.50th=[ 412], 99.90th=[41157], 99.95th=[41157], 00:09:13.699 | 99.99th=[41157] 00:09:13.699 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:13.699 slat (nsec): min=9796, max=38151, avg=10901.88, stdev=1270.17 00:09:13.699 clat (usec): min=122, max=899, avg=169.48, stdev=32.59 00:09:13.699 lat (usec): min=133, max=910, avg=180.38, stdev=32.80 00:09:13.699 clat percentiles (usec): 00:09:13.699 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:09:13.699 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 169], 00:09:13.699 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 198], 95.00th=[ 208], 00:09:13.699 | 99.00th=[ 245], 99.50th=[ 265], 99.90th=[ 627], 99.95th=[ 644], 00:09:13.699 | 99.99th=[ 898] 00:09:13.699 bw ( KiB/s): min= 8456, max= 8456, per=32.43%, avg=8456.00, stdev= 0.00, samples=1 00:09:13.699 iops : min= 2114, max= 2114, avg=2114.00, stdev= 0.00, samples=1 00:09:13.699 lat (usec) : 250=74.46%, 500=25.39%, 750=0.07%, 1000=0.02% 00:09:13.699 lat (msec) : 50=0.05% 00:09:13.699 cpu : usr=2.00%, sys=3.80%, ctx=4042, majf=0, minf=1 00:09:13.699 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.699 issued rwts: total=1993,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.699 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.699 00:09:13.699 Run status group 0 (all jobs): 00:09:13.699 READ: bw=20.4MiB/s (21.4MB/s), 2037KiB/s-7964KiB/s (2086kB/s-8155kB/s), 
io=20.8MiB (21.8MB), run=1001-1021msec 00:09:13.699 WRITE: bw=25.5MiB/s (26.7MB/s), 4012KiB/s-8184KiB/s (4108kB/s-8380kB/s), io=26.0MiB (27.3MB), run=1001-1021msec 00:09:13.699 00:09:13.699 Disk stats (read/write): 00:09:13.699 nvme0n1: ios=564/1024, merge=0/0, ticks=591/186, in_queue=777, util=82.26% 00:09:13.699 nvme0n2: ios=1592/2048, merge=0/0, ticks=666/301, in_queue=967, util=97.84% 00:09:13.699 nvme0n3: ios=784/1024, merge=0/0, ticks=569/197, in_queue=766, util=87.61% 00:09:13.699 nvme0n4: ios=1593/1690, merge=0/0, ticks=668/285, in_queue=953, util=97.79% 00:09:13.699 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:13.699 [global] 00:09:13.699 thread=1 00:09:13.699 invalidate=1 00:09:13.699 rw=randwrite 00:09:13.699 time_based=1 00:09:13.699 runtime=1 00:09:13.699 ioengine=libaio 00:09:13.699 direct=1 00:09:13.699 bs=4096 00:09:13.699 iodepth=1 00:09:13.699 norandommap=0 00:09:13.699 numjobs=1 00:09:13.699 00:09:13.699 verify_dump=1 00:09:13.699 verify_backlog=512 00:09:13.699 verify_state_save=0 00:09:13.699 do_verify=1 00:09:13.699 verify=crc32c-intel 00:09:13.699 [job0] 00:09:13.699 filename=/dev/nvme0n1 00:09:13.699 [job1] 00:09:13.699 filename=/dev/nvme0n2 00:09:13.699 [job2] 00:09:13.699 filename=/dev/nvme0n3 00:09:13.699 [job3] 00:09:13.699 filename=/dev/nvme0n4 00:09:13.699 Could not set queue depth (nvme0n1) 00:09:13.699 Could not set queue depth (nvme0n2) 00:09:13.699 Could not set queue depth (nvme0n3) 00:09:13.699 Could not set queue depth (nvme0n4) 00:09:13.959 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:13.959 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:13.959 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:09:13.959 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:13.959 fio-3.35 00:09:13.959 Starting 4 threads 00:09:15.350 00:09:15.350 job0: (groupid=0, jobs=1): err= 0: pid=2051296: Wed Nov 20 15:18:18 2024 00:09:15.350 read: IOPS=105, BW=421KiB/s (432kB/s)(432KiB/1025msec) 00:09:15.350 slat (nsec): min=2708, max=26345, avg=10521.73, stdev=6678.94 00:09:15.350 clat (usec): min=184, max=41086, avg=8618.03, stdev=16452.67 00:09:15.350 lat (usec): min=192, max=41109, avg=8628.55, stdev=16458.11 00:09:15.350 clat percentiles (usec): 00:09:15.350 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 206], 00:09:15.350 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 231], 00:09:15.350 | 70.00th=[ 262], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:09:15.350 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:15.350 | 99.99th=[41157] 00:09:15.350 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:09:15.350 slat (nsec): min=9553, max=60972, avg=10567.16, stdev=2523.64 00:09:15.350 clat (usec): min=128, max=423, avg=167.95, stdev=25.28 00:09:15.350 lat (usec): min=138, max=484, avg=178.51, stdev=26.39 00:09:15.350 clat percentiles (usec): 00:09:15.350 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:09:15.350 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:09:15.350 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 241], 00:09:15.350 | 99.00th=[ 247], 99.50th=[ 249], 99.90th=[ 424], 99.95th=[ 424], 00:09:15.350 | 99.99th=[ 424] 00:09:15.350 bw ( KiB/s): min= 4096, max= 4096, per=17.33%, avg=4096.00, stdev= 0.00, samples=1 00:09:15.350 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:15.350 lat (usec) : 250=93.71%, 500=2.58% 00:09:15.350 lat (msec) : 20=0.16%, 50=3.55% 00:09:15.350 cpu : usr=0.49%, sys=0.39%, ctx=621, majf=0, minf=1 00:09:15.350 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.350 issued rwts: total=108,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.350 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.350 job1: (groupid=0, jobs=1): err= 0: pid=2051297: Wed Nov 20 15:18:18 2024 00:09:15.350 read: IOPS=2309, BW=9239KiB/s (9460kB/s)(9248KiB/1001msec) 00:09:15.350 slat (nsec): min=7223, max=46673, avg=8385.59, stdev=1285.54 00:09:15.350 clat (usec): min=156, max=464, avg=230.44, stdev=33.71 00:09:15.350 lat (usec): min=174, max=474, avg=238.82, stdev=33.66 00:09:15.350 clat percentiles (usec): 00:09:15.350 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 200], 00:09:15.350 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 227], 60.00th=[ 239], 00:09:15.350 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 293], 95.00th=[ 297], 00:09:15.350 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 437], 99.95th=[ 465], 00:09:15.350 | 99.99th=[ 465] 00:09:15.350 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:15.350 slat (nsec): min=10370, max=40653, avg=12022.46, stdev=2066.72 00:09:15.350 clat (usec): min=121, max=600, avg=157.09, stdev=19.48 00:09:15.350 lat (usec): min=132, max=612, avg=169.11, stdev=19.97 00:09:15.350 clat percentiles (usec): 00:09:15.350 | 1.00th=[ 130], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:09:15.350 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:09:15.350 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 182], 00:09:15.350 | 99.00th=[ 200], 99.50th=[ 212], 99.90th=[ 396], 99.95th=[ 498], 00:09:15.350 | 99.99th=[ 603] 00:09:15.350 bw ( KiB/s): min=10680, max=10680, per=45.20%, avg=10680.00, stdev= 0.00, samples=1 00:09:15.350 iops : min= 2670, max= 2670, avg=2670.00, stdev= 0.00, samples=1 00:09:15.350 
lat (usec) : 250=88.44%, 500=11.54%, 750=0.02% 00:09:15.350 cpu : usr=3.90%, sys=8.10%, ctx=4873, majf=0, minf=1 00:09:15.350 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.350 issued rwts: total=2312,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.350 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.350 job2: (groupid=0, jobs=1): err= 0: pid=2051298: Wed Nov 20 15:18:18 2024 00:09:15.350 read: IOPS=24, BW=96.2KiB/s (98.5kB/s)(100KiB/1040msec) 00:09:15.350 slat (nsec): min=6102, max=26065, avg=21720.80, stdev=5484.51 00:09:15.350 clat (usec): min=261, max=42055, avg=37836.65, stdev=11314.12 00:09:15.351 lat (usec): min=286, max=42067, avg=37858.37, stdev=11313.55 00:09:15.351 clat percentiles (usec): 00:09:15.351 | 1.00th=[ 262], 5.00th=[ 277], 10.00th=[40633], 20.00th=[40633], 00:09:15.351 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:15.351 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:09:15.351 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:15.351 | 99.99th=[42206] 00:09:15.351 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:09:15.351 slat (nsec): min=6632, max=37962, avg=8849.38, stdev=2916.92 00:09:15.351 clat (usec): min=138, max=338, avg=170.62, stdev=19.17 00:09:15.351 lat (usec): min=145, max=346, avg=179.47, stdev=20.43 00:09:15.351 clat percentiles (usec): 00:09:15.351 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:09:15.351 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:09:15.351 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 202], 00:09:15.351 | 99.00th=[ 239], 99.50th=[ 269], 99.90th=[ 338], 99.95th=[ 338], 00:09:15.351 | 99.99th=[ 338] 00:09:15.351 bw ( KiB/s): 
min= 4096, max= 4096, per=17.33%, avg=4096.00, stdev= 0.00, samples=1 00:09:15.351 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:15.351 lat (usec) : 250=94.60%, 500=1.12% 00:09:15.351 lat (msec) : 50=4.28% 00:09:15.351 cpu : usr=0.58%, sys=0.48%, ctx=538, majf=0, minf=1 00:09:15.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.351 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.351 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.351 job3: (groupid=0, jobs=1): err= 0: pid=2051299: Wed Nov 20 15:18:18 2024 00:09:15.351 read: IOPS=2275, BW=9103KiB/s (9321kB/s)(9112KiB/1001msec) 00:09:15.351 slat (nsec): min=7378, max=43793, avg=8570.23, stdev=1701.95 00:09:15.351 clat (usec): min=180, max=893, avg=228.10, stdev=23.68 00:09:15.351 lat (usec): min=188, max=901, avg=236.67, stdev=23.66 00:09:15.351 clat percentiles (usec): 00:09:15.351 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:09:15.351 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:09:15.351 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 258], 00:09:15.351 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 363], 99.95th=[ 693], 00:09:15.351 | 99.99th=[ 898] 00:09:15.351 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:15.351 slat (nsec): min=10614, max=52246, avg=11856.20, stdev=1967.66 00:09:15.351 clat (usec): min=125, max=372, avg=162.70, stdev=13.59 00:09:15.351 lat (usec): min=136, max=425, avg=174.55, stdev=14.06 00:09:15.351 clat percentiles (usec): 00:09:15.351 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:09:15.351 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:09:15.351 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 
180], 95.00th=[ 186], 00:09:15.351 | 99.00th=[ 200], 99.50th=[ 206], 99.90th=[ 258], 99.95th=[ 265], 00:09:15.351 | 99.99th=[ 375] 00:09:15.351 bw ( KiB/s): min=11712, max=11712, per=49.56%, avg=11712.00, stdev= 0.00, samples=1 00:09:15.351 iops : min= 2928, max= 2928, avg=2928.00, stdev= 0.00, samples=1 00:09:15.351 lat (usec) : 250=95.23%, 500=4.73%, 750=0.02%, 1000=0.02% 00:09:15.351 cpu : usr=3.50%, sys=8.30%, ctx=4839, majf=0, minf=1 00:09:15.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.351 issued rwts: total=2278,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.351 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.351 00:09:15.351 Run status group 0 (all jobs): 00:09:15.351 READ: bw=17.7MiB/s (18.6MB/s), 96.2KiB/s-9239KiB/s (98.5kB/s-9460kB/s), io=18.4MiB (19.3MB), run=1001-1040msec 00:09:15.351 WRITE: bw=23.1MiB/s (24.2MB/s), 1969KiB/s-9.99MiB/s (2016kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1040msec 00:09:15.351 00:09:15.351 Disk stats (read/write): 00:09:15.351 nvme0n1: ios=154/512, merge=0/0, ticks=1476/85, in_queue=1561, util=89.98% 00:09:15.351 nvme0n2: ios=2083/2104, merge=0/0, ticks=716/300, in_queue=1016, util=99.39% 00:09:15.351 nvme0n3: ios=77/512, merge=0/0, ticks=975/84, in_queue=1059, util=93.96% 00:09:15.351 nvme0n4: ios=2101/2048, merge=0/0, ticks=1374/311, in_queue=1685, util=98.32% 00:09:15.351 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:15.351 [global] 00:09:15.351 thread=1 00:09:15.351 invalidate=1 00:09:15.351 rw=write 00:09:15.351 time_based=1 00:09:15.351 runtime=1 00:09:15.351 ioengine=libaio 00:09:15.351 direct=1 00:09:15.351 bs=4096 00:09:15.351 iodepth=128 
00:09:15.351 norandommap=0 00:09:15.351 numjobs=1 00:09:15.351 00:09:15.351 verify_dump=1 00:09:15.351 verify_backlog=512 00:09:15.351 verify_state_save=0 00:09:15.351 do_verify=1 00:09:15.351 verify=crc32c-intel 00:09:15.351 [job0] 00:09:15.351 filename=/dev/nvme0n1 00:09:15.351 [job1] 00:09:15.351 filename=/dev/nvme0n2 00:09:15.351 [job2] 00:09:15.351 filename=/dev/nvme0n3 00:09:15.351 [job3] 00:09:15.351 filename=/dev/nvme0n4 00:09:15.351 Could not set queue depth (nvme0n1) 00:09:15.351 Could not set queue depth (nvme0n2) 00:09:15.351 Could not set queue depth (nvme0n3) 00:09:15.351 Could not set queue depth (nvme0n4) 00:09:15.608 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.608 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.608 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.608 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.608 fio-3.35 00:09:15.608 Starting 4 threads 00:09:16.979 00:09:16.979 job0: (groupid=0, jobs=1): err= 0: pid=2051668: Wed Nov 20 15:18:20 2024 00:09:16.979 read: IOPS=3942, BW=15.4MiB/s (16.1MB/s)(15.5MiB/1005msec) 00:09:16.979 slat (nsec): min=1169, max=19438k, avg=111558.24, stdev=837886.10 00:09:16.979 clat (usec): min=2161, max=54962, avg=13253.15, stdev=7928.95 00:09:16.979 lat (usec): min=2430, max=54973, avg=13364.71, stdev=8021.19 00:09:16.979 clat percentiles (usec): 00:09:16.979 | 1.00th=[ 2737], 5.00th=[ 6259], 10.00th=[ 7373], 20.00th=[ 8455], 00:09:16.979 | 30.00th=[ 9241], 40.00th=[10421], 50.00th=[11469], 60.00th=[12518], 00:09:16.979 | 70.00th=[13304], 80.00th=[14615], 90.00th=[21365], 95.00th=[29492], 00:09:16.979 | 99.00th=[47973], 99.50th=[50594], 99.90th=[54789], 99.95th=[54789], 00:09:16.979 | 99.99th=[54789] 00:09:16.979 write: IOPS=4075, 
BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:09:16.979 slat (nsec): min=1979, max=24126k, avg=127844.68, stdev=791604.96 00:09:16.979 clat (usec): min=3047, max=72150, avg=18221.56, stdev=13139.47 00:09:16.979 lat (usec): min=3055, max=72157, avg=18349.41, stdev=13205.79 00:09:16.979 clat percentiles (usec): 00:09:16.979 | 1.00th=[ 5604], 5.00th=[ 5997], 10.00th=[ 8160], 20.00th=[ 8455], 00:09:16.979 | 30.00th=[10421], 40.00th=[12387], 50.00th=[15139], 60.00th=[17171], 00:09:16.979 | 70.00th=[19006], 80.00th=[20317], 90.00th=[40633], 95.00th=[52691], 00:09:16.979 | 99.00th=[61604], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:09:16.979 | 99.99th=[71828] 00:09:16.979 bw ( KiB/s): min=14008, max=18760, per=24.39%, avg=16384.00, stdev=3360.17, samples=2 00:09:16.979 iops : min= 3502, max= 4690, avg=4096.00, stdev=840.04, samples=2 00:09:16.979 lat (msec) : 4=1.30%, 10=29.72%, 20=52.68%, 50=12.68%, 100=3.61% 00:09:16.979 cpu : usr=2.69%, sys=5.08%, ctx=377, majf=0, minf=1 00:09:16.979 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:16.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.979 issued rwts: total=3962,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.979 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.979 job1: (groupid=0, jobs=1): err= 0: pid=2051669: Wed Nov 20 15:18:20 2024 00:09:16.979 read: IOPS=4202, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1006msec) 00:09:16.979 slat (nsec): min=1064, max=16660k, avg=105508.46, stdev=756875.55 00:09:16.979 clat (usec): min=3519, max=51192, avg=13256.83, stdev=7323.30 00:09:16.979 lat (usec): min=3525, max=53840, avg=13362.34, stdev=7384.55 00:09:16.979 clat percentiles (usec): 00:09:16.979 | 1.00th=[ 5735], 5.00th=[ 7635], 10.00th=[ 7963], 20.00th=[ 9241], 00:09:16.979 | 30.00th=[ 9372], 40.00th=[10290], 50.00th=[11076], 
60.00th=[11600], 00:09:16.979 | 70.00th=[13698], 80.00th=[14615], 90.00th=[21365], 95.00th=[27657], 00:09:16.979 | 99.00th=[45876], 99.50th=[45876], 99.90th=[49021], 99.95th=[49021], 00:09:16.979 | 99.99th=[51119] 00:09:16.979 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:09:16.979 slat (nsec): min=1991, max=21996k, avg=110208.81, stdev=646010.55 00:09:16.979 clat (usec): min=650, max=43808, avg=14830.27, stdev=8695.89 00:09:16.979 lat (usec): min=663, max=46197, avg=14940.48, stdev=8765.34 00:09:16.979 clat percentiles (usec): 00:09:16.979 | 1.00th=[ 2802], 5.00th=[ 5145], 10.00th=[ 6783], 20.00th=[ 8225], 00:09:16.979 | 30.00th=[ 8717], 40.00th=[10028], 50.00th=[11076], 60.00th=[13698], 00:09:16.979 | 70.00th=[19268], 80.00th=[20579], 90.00th=[28181], 95.00th=[33424], 00:09:16.979 | 99.00th=[39060], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:09:16.979 | 99.99th=[43779] 00:09:16.979 bw ( KiB/s): min=16384, max=20480, per=27.44%, avg=18432.00, stdev=2896.31, samples=2 00:09:16.979 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:09:16.979 lat (usec) : 750=0.10%, 1000=0.06% 00:09:16.979 lat (msec) : 2=0.03%, 4=1.51%, 10=37.32%, 20=43.31%, 50=17.66% 00:09:16.979 lat (msec) : 100=0.01% 00:09:16.979 cpu : usr=2.29%, sys=4.68%, ctx=440, majf=0, minf=1 00:09:16.979 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:16.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.979 issued rwts: total=4228,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.979 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.979 job2: (groupid=0, jobs=1): err= 0: pid=2051670: Wed Nov 20 15:18:20 2024 00:09:16.979 read: IOPS=3244, BW=12.7MiB/s (13.3MB/s)(12.8MiB/1006msec) 00:09:16.979 slat (nsec): min=1102, max=48751k, avg=139883.72, stdev=1227427.40 00:09:16.980 clat 
(usec): min=1255, max=100262, avg=19024.39, stdev=17458.53 00:09:16.980 lat (msec): min=2, max=100, avg=19.16, stdev=17.52 00:09:16.980 clat percentiles (msec): 00:09:16.980 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:09:16.980 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 14], 00:09:16.980 | 70.00th=[ 14], 80.00th=[ 20], 90.00th=[ 45], 95.00th=[ 60], 00:09:16.980 | 99.00th=[ 90], 99.50th=[ 101], 99.90th=[ 101], 99.95th=[ 101], 00:09:16.980 | 99.99th=[ 101] 00:09:16.980 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:09:16.980 slat (nsec): min=1986, max=41192k, avg=147808.59, stdev=997272.83 00:09:16.980 clat (usec): min=3025, max=75044, avg=15904.24, stdev=11074.49 00:09:16.980 lat (usec): min=3035, max=75053, avg=16052.05, stdev=11200.85 00:09:16.980 clat percentiles (usec): 00:09:16.980 | 1.00th=[ 3261], 5.00th=[ 7373], 10.00th=[ 9503], 20.00th=[10159], 00:09:16.980 | 30.00th=[10945], 40.00th=[11600], 50.00th=[11863], 60.00th=[12911], 00:09:16.980 | 70.00th=[15139], 80.00th=[17695], 90.00th=[28967], 95.00th=[43254], 00:09:16.980 | 99.00th=[58983], 99.50th=[68682], 99.90th=[74974], 99.95th=[74974], 00:09:16.980 | 99.99th=[74974] 00:09:16.980 bw ( KiB/s): min=11712, max=16960, per=21.34%, avg=14336.00, stdev=3710.90, samples=2 00:09:16.980 iops : min= 2928, max= 4240, avg=3584.00, stdev=927.72, samples=2 00:09:16.980 lat (msec) : 2=0.01%, 4=1.59%, 10=13.80%, 20=68.06%, 50=11.55% 00:09:16.980 lat (msec) : 100=4.53%, 250=0.45% 00:09:16.980 cpu : usr=1.59%, sys=2.79%, ctx=461, majf=0, minf=1 00:09:16.980 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:16.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.980 issued rwts: total=3264,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.980 job3: 
(groupid=0, jobs=1): err= 0: pid=2051671: Wed Nov 20 15:18:20 2024 00:09:16.980 read: IOPS=4267, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1006msec) 00:09:16.980 slat (nsec): min=1083, max=19417k, avg=109474.29, stdev=788052.82 00:09:16.980 clat (usec): min=5080, max=35210, avg=13341.78, stdev=5004.36 00:09:16.980 lat (usec): min=5412, max=43968, avg=13451.25, stdev=5056.42 00:09:16.980 clat percentiles (usec): 00:09:16.980 | 1.00th=[ 5604], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10159], 00:09:16.980 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11863], 60.00th=[12387], 00:09:16.980 | 70.00th=[13173], 80.00th=[14484], 90.00th=[21103], 95.00th=[25035], 00:09:16.980 | 99.00th=[31065], 99.50th=[32113], 99.90th=[35390], 99.95th=[35390], 00:09:16.980 | 99.99th=[35390] 00:09:16.980 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:09:16.980 slat (nsec): min=1910, max=12074k, avg=108097.86, stdev=523038.03 00:09:16.980 clat (usec): min=2431, max=44622, avg=15196.22, stdev=7510.38 00:09:16.980 lat (usec): min=2441, max=44626, avg=15304.32, stdev=7548.04 00:09:16.980 clat percentiles (usec): 00:09:16.980 | 1.00th=[ 4490], 5.00th=[ 7898], 10.00th=[ 9241], 20.00th=[ 9896], 00:09:16.980 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11863], 60.00th=[12518], 00:09:16.980 | 70.00th=[17171], 80.00th=[20579], 90.00th=[26870], 95.00th=[30540], 00:09:16.980 | 99.00th=[43254], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:09:16.980 | 99.99th=[44827] 00:09:16.980 bw ( KiB/s): min=15760, max=21104, per=27.44%, avg=18432.00, stdev=3778.78, samples=2 00:09:16.980 iops : min= 3940, max= 5276, avg=4608.00, stdev=944.69, samples=2 00:09:16.980 lat (msec) : 4=0.21%, 10=16.14%, 20=67.31%, 50=16.34% 00:09:16.980 cpu : usr=3.38%, sys=3.98%, ctx=545, majf=0, minf=2 00:09:16.980 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:16.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.980 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.980 issued rwts: total=4293,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.980 00:09:16.980 Run status group 0 (all jobs): 00:09:16.980 READ: bw=61.1MiB/s (64.1MB/s), 12.7MiB/s-16.7MiB/s (13.3MB/s-17.5MB/s), io=61.5MiB (64.5MB), run=1005-1006msec 00:09:16.980 WRITE: bw=65.6MiB/s (68.8MB/s), 13.9MiB/s-17.9MiB/s (14.6MB/s-18.8MB/s), io=66.0MiB (69.2MB), run=1005-1006msec 00:09:16.980 00:09:16.980 Disk stats (read/write): 00:09:16.980 nvme0n1: ios=2772/3072, merge=0/0, ticks=20073/29380, in_queue=49453, util=86.87% 00:09:16.980 nvme0n2: ios=3636/3743, merge=0/0, ticks=38942/51436, in_queue=90378, util=90.95% 00:09:16.980 nvme0n3: ios=2967/3072, merge=0/0, ticks=13498/13856, in_queue=27354, util=94.14% 00:09:16.980 nvme0n4: ios=3641/3703, merge=0/0, ticks=28448/29316, in_queue=57764, util=93.82% 00:09:16.980 15:18:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:16.980 [global] 00:09:16.980 thread=1 00:09:16.980 invalidate=1 00:09:16.980 rw=randwrite 00:09:16.980 time_based=1 00:09:16.980 runtime=1 00:09:16.980 ioengine=libaio 00:09:16.980 direct=1 00:09:16.980 bs=4096 00:09:16.980 iodepth=128 00:09:16.980 norandommap=0 00:09:16.980 numjobs=1 00:09:16.980 00:09:16.980 verify_dump=1 00:09:16.980 verify_backlog=512 00:09:16.980 verify_state_save=0 00:09:16.980 do_verify=1 00:09:16.980 verify=crc32c-intel 00:09:16.980 [job0] 00:09:16.980 filename=/dev/nvme0n1 00:09:16.980 [job1] 00:09:16.980 filename=/dev/nvme0n2 00:09:16.980 [job2] 00:09:16.980 filename=/dev/nvme0n3 00:09:16.980 [job3] 00:09:16.980 filename=/dev/nvme0n4 00:09:16.980 Could not set queue depth (nvme0n1) 00:09:16.980 Could not set queue depth (nvme0n2) 00:09:16.980 Could not set queue depth (nvme0n3) 00:09:16.980 Could not set queue 
depth (nvme0n4) 00:09:17.237 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:17.237 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:17.237 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:17.237 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:17.237 fio-3.35 00:09:17.237 Starting 4 threads 00:09:18.608 00:09:18.608 job0: (groupid=0, jobs=1): err= 0: pid=2052045: Wed Nov 20 15:18:22 2024 00:09:18.608 read: IOPS=3815, BW=14.9MiB/s (15.6MB/s)(15.6MiB/1045msec) 00:09:18.608 slat (nsec): min=1113, max=47667k, avg=129478.82, stdev=1011752.15 00:09:18.608 clat (msec): min=6, max=104, avg=17.93, stdev=19.19 00:09:18.608 lat (msec): min=6, max=104, avg=18.06, stdev=19.28 00:09:18.608 clat percentiles (msec): 00:09:18.608 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:09:18.609 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:09:18.609 | 70.00th=[ 12], 80.00th=[ 17], 90.00th=[ 46], 95.00th=[ 59], 00:09:18.609 | 99.00th=[ 102], 99.50th=[ 105], 99.90th=[ 105], 99.95th=[ 105], 00:09:18.609 | 99.99th=[ 105] 00:09:18.609 write: IOPS=3919, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1045msec); 0 zone resets 00:09:18.609 slat (usec): min=2, max=11858, avg=113.32, stdev=696.80 00:09:18.609 clat (usec): min=6626, max=69140, avg=14785.00, stdev=9560.18 00:09:18.609 lat (usec): min=6637, max=69150, avg=14898.33, stdev=9623.68 00:09:18.609 clat percentiles (usec): 00:09:18.609 | 1.00th=[ 7111], 5.00th=[ 8291], 10.00th=[ 9241], 20.00th=[ 9503], 00:09:18.609 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[11338], 00:09:18.609 | 70.00th=[12518], 80.00th=[17695], 90.00th=[31065], 95.00th=[34341], 00:09:18.609 | 99.00th=[57410], 99.50th=[66847], 99.90th=[68682], 99.95th=[68682], 00:09:18.609 | 
99.99th=[68682] 00:09:18.609 bw ( KiB/s): min=12288, max=20480, per=23.78%, avg=16384.00, stdev=5792.62, samples=2 00:09:18.609 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:09:18.609 lat (msec) : 10=27.77%, 20=56.39%, 50=11.42%, 100=3.65%, 250=0.77% 00:09:18.609 cpu : usr=2.68%, sys=4.21%, ctx=448, majf=0, minf=1 00:09:18.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:18.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.609 issued rwts: total=3987,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.609 job1: (groupid=0, jobs=1): err= 0: pid=2052046: Wed Nov 20 15:18:22 2024 00:09:18.609 read: IOPS=3654, BW=14.3MiB/s (15.0MB/s)(14.3MiB/1002msec) 00:09:18.609 slat (nsec): min=1070, max=43735k, avg=165357.00, stdev=1242119.31 00:09:18.609 clat (usec): min=1402, max=69089, avg=19952.47, stdev=14500.02 00:09:18.609 lat (usec): min=1406, max=69097, avg=20117.82, stdev=14568.42 00:09:18.609 clat percentiles (usec): 00:09:18.609 | 1.00th=[ 4293], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[10421], 00:09:18.609 | 30.00th=[10814], 40.00th=[11469], 50.00th=[12518], 60.00th=[15139], 00:09:18.609 | 70.00th=[22152], 80.00th=[29492], 90.00th=[46924], 95.00th=[53216], 00:09:18.609 | 99.00th=[61604], 99.50th=[64226], 99.90th=[68682], 99.95th=[68682], 00:09:18.609 | 99.99th=[68682] 00:09:18.609 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:09:18.609 slat (nsec): min=1806, max=3993.0k, avg=91793.28, stdev=454225.10 00:09:18.609 clat (usec): min=6454, max=69777, avg=13073.35, stdev=7929.10 00:09:18.609 lat (usec): min=6531, max=69782, avg=13165.14, stdev=7922.95 00:09:18.609 clat percentiles (usec): 00:09:18.609 | 1.00th=[ 7308], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9503], 00:09:18.609 | 
30.00th=[10290], 40.00th=[10683], 50.00th=[11076], 60.00th=[11863], 00:09:18.609 | 70.00th=[12518], 80.00th=[15533], 90.00th=[16057], 95.00th=[19006], 00:09:18.609 | 99.00th=[64226], 99.50th=[65799], 99.90th=[67634], 99.95th=[68682], 00:09:18.609 | 99.99th=[69731] 00:09:18.609 bw ( KiB/s): min=15624, max=15624, per=22.67%, avg=15624.00, stdev= 0.00, samples=1 00:09:18.609 iops : min= 3906, max= 3906, avg=3906.00, stdev= 0.00, samples=1 00:09:18.609 lat (msec) : 2=0.17%, 4=0.05%, 10=21.86%, 20=60.39%, 50=13.23% 00:09:18.609 lat (msec) : 100=4.31% 00:09:18.609 cpu : usr=2.50%, sys=3.30%, ctx=396, majf=0, minf=1 00:09:18.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:18.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.609 issued rwts: total=3662,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.609 job2: (groupid=0, jobs=1): err= 0: pid=2052049: Wed Nov 20 15:18:22 2024 00:09:18.609 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:09:18.609 slat (nsec): min=1137, max=12329k, avg=100435.95, stdev=698949.99 00:09:18.609 clat (usec): min=3557, max=31918, avg=12879.86, stdev=3259.56 00:09:18.609 lat (usec): min=3563, max=31921, avg=12980.30, stdev=3315.61 00:09:18.609 clat percentiles (usec): 00:09:18.609 | 1.00th=[ 5473], 5.00th=[ 8848], 10.00th=[10028], 20.00th=[10945], 00:09:18.609 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12649], 60.00th=[12911], 00:09:18.609 | 70.00th=[13304], 80.00th=[14353], 90.00th=[16188], 95.00th=[18744], 00:09:18.609 | 99.00th=[25822], 99.50th=[28967], 99.90th=[31851], 99.95th=[31851], 00:09:18.609 | 99.99th=[31851] 00:09:18.609 write: IOPS=4671, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1004msec); 0 zone resets 00:09:18.609 slat (usec): min=2, max=8593, avg=103.20, stdev=578.76 00:09:18.609 clat (usec): min=632, 
max=32428, avg=14512.76, stdev=7129.13 00:09:18.609 lat (usec): min=639, max=32437, avg=14615.96, stdev=7185.75 00:09:18.609 clat percentiles (usec): 00:09:18.609 | 1.00th=[ 1876], 5.00th=[ 4686], 10.00th=[ 6390], 20.00th=[ 9110], 00:09:18.609 | 30.00th=[11207], 40.00th=[11994], 50.00th=[13304], 60.00th=[14091], 00:09:18.609 | 70.00th=[15795], 80.00th=[19006], 90.00th=[26608], 95.00th=[30802], 00:09:18.609 | 99.00th=[32113], 99.50th=[32375], 99.90th=[32375], 99.95th=[32375], 00:09:18.609 | 99.99th=[32375] 00:09:18.609 bw ( KiB/s): min=17936, max=18928, per=26.75%, avg=18432.00, stdev=701.45, samples=2 00:09:18.609 iops : min= 4484, max= 4732, avg=4608.00, stdev=175.36, samples=2 00:09:18.609 lat (usec) : 750=0.04% 00:09:18.609 lat (msec) : 2=0.59%, 4=1.59%, 10=14.61%, 20=72.50%, 50=10.67% 00:09:18.609 cpu : usr=3.19%, sys=4.99%, ctx=413, majf=0, minf=1 00:09:18.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:18.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.609 issued rwts: total=4608,4690,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.609 job3: (groupid=0, jobs=1): err= 0: pid=2052051: Wed Nov 20 15:18:22 2024 00:09:18.609 read: IOPS=4945, BW=19.3MiB/s (20.3MB/s)(19.4MiB/1003msec) 00:09:18.609 slat (nsec): min=1117, max=9536.8k, avg=88106.32, stdev=532197.87 00:09:18.609 clat (usec): min=395, max=43442, avg=12158.43, stdev=3706.29 00:09:18.609 lat (usec): min=3291, max=48400, avg=12246.54, stdev=3737.68 00:09:18.609 clat percentiles (usec): 00:09:18.609 | 1.00th=[ 3949], 5.00th=[ 7767], 10.00th=[ 9372], 20.00th=[10290], 00:09:18.609 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11994], 60.00th=[12256], 00:09:18.609 | 70.00th=[12780], 80.00th=[13304], 90.00th=[14222], 95.00th=[16057], 00:09:18.609 | 99.00th=[30540], 99.50th=[36439], 
99.90th=[41681], 99.95th=[43254], 00:09:18.609 | 99.99th=[43254] 00:09:18.609 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:09:18.609 slat (usec): min=2, max=10340, avg=98.15, stdev=560.40 00:09:18.609 clat (usec): min=5469, max=36215, avg=12946.20, stdev=3874.44 00:09:18.609 lat (usec): min=5477, max=36223, avg=13044.35, stdev=3899.70 00:09:18.609 clat percentiles (usec): 00:09:18.609 | 1.00th=[ 7504], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10814], 00:09:18.609 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[12125], 00:09:18.609 | 70.00th=[13042], 80.00th=[15008], 90.00th=[18220], 95.00th=[20841], 00:09:18.609 | 99.00th=[27132], 99.50th=[27395], 99.90th=[28443], 99.95th=[28443], 00:09:18.609 | 99.99th=[36439] 00:09:18.609 bw ( KiB/s): min=20480, max=20480, per=29.72%, avg=20480.00, stdev= 0.00, samples=2 00:09:18.609 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:09:18.609 lat (usec) : 500=0.01% 00:09:18.609 lat (msec) : 4=0.52%, 10=14.47%, 20=80.60%, 50=4.40% 00:09:18.609 cpu : usr=2.30%, sys=5.59%, ctx=478, majf=0, minf=1 00:09:18.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:18.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.609 issued rwts: total=4960,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.609 00:09:18.609 Run status group 0 (all jobs): 00:09:18.609 READ: bw=64.4MiB/s (67.5MB/s), 14.3MiB/s-19.3MiB/s (15.0MB/s-20.3MB/s), io=67.3MiB (70.5MB), run=1002-1045msec 00:09:18.609 WRITE: bw=67.3MiB/s (70.6MB/s), 15.3MiB/s-19.9MiB/s (16.1MB/s-20.9MB/s), io=70.3MiB (73.7MB), run=1002-1045msec 00:09:18.609 00:09:18.609 Disk stats (read/write): 00:09:18.609 nvme0n1: ios=3635/3607, merge=0/0, ticks=17632/14906, in_queue=32538, util=94.09% 00:09:18.609 nvme0n2: 
ios=3123/3343, merge=0/0, ticks=17596/10187, in_queue=27783, util=98.38% 00:09:18.609 nvme0n3: ios=3636/4096, merge=0/0, ticks=38496/49008, in_queue=87504, util=100.00% 00:09:18.609 nvme0n4: ios=4146/4312, merge=0/0, ticks=25114/25047, in_queue=50161, util=99.69% 00:09:18.609 15:18:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:18.609 15:18:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2052278 00:09:18.609 15:18:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:18.609 15:18:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:18.609 [global] 00:09:18.609 thread=1 00:09:18.609 invalidate=1 00:09:18.609 rw=read 00:09:18.609 time_based=1 00:09:18.609 runtime=10 00:09:18.609 ioengine=libaio 00:09:18.609 direct=1 00:09:18.609 bs=4096 00:09:18.609 iodepth=1 00:09:18.609 norandommap=1 00:09:18.609 numjobs=1 00:09:18.609 00:09:18.609 [job0] 00:09:18.609 filename=/dev/nvme0n1 00:09:18.609 [job1] 00:09:18.609 filename=/dev/nvme0n2 00:09:18.609 [job2] 00:09:18.609 filename=/dev/nvme0n3 00:09:18.609 [job3] 00:09:18.609 filename=/dev/nvme0n4 00:09:18.609 Could not set queue depth (nvme0n1) 00:09:18.609 Could not set queue depth (nvme0n2) 00:09:18.609 Could not set queue depth (nvme0n3) 00:09:18.609 Could not set queue depth (nvme0n4) 00:09:18.866 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.866 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.866 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.866 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.866 fio-3.35 00:09:18.866 Starting 4 threads 00:09:21.391 
15:18:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:21.648 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=696320, buflen=4096 00:09:21.648 fio: pid=2052540, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:21.648 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:21.906 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:21.906 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:21.906 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1241088, buflen=4096 00:09:21.906 fio: pid=2052534, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:21.906 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=62066688, buflen=4096 00:09:21.906 fio: pid=2052492, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:22.163 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:22.163 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:22.163 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:22.163 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc2 00:09:22.163 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=7024640, buflen=4096 00:09:22.163 fio: pid=2052512, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:22.420 00:09:22.420 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2052492: Wed Nov 20 15:18:26 2024 00:09:22.420 read: IOPS=4832, BW=18.9MiB/s (19.8MB/s)(59.2MiB/3136msec) 00:09:22.420 slat (usec): min=6, max=14696, avg= 9.64, stdev=158.44 00:09:22.420 clat (usec): min=145, max=673, avg=194.80, stdev=23.19 00:09:22.420 lat (usec): min=159, max=15000, avg=204.43, stdev=161.33 00:09:22.420 clat percentiles (usec): 00:09:22.420 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 180], 00:09:22.420 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 194], 00:09:22.420 | 70.00th=[ 198], 80.00th=[ 208], 90.00th=[ 229], 95.00th=[ 241], 00:09:22.420 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 302], 99.95th=[ 371], 00:09:22.420 | 99.99th=[ 586] 00:09:22.420 bw ( KiB/s): min=18168, max=20344, per=95.16%, avg=19420.67, stdev=865.38, samples=6 00:09:22.420 iops : min= 4542, max= 5086, avg=4855.17, stdev=216.34, samples=6 00:09:22.420 lat (usec) : 250=97.20%, 500=2.77%, 750=0.03% 00:09:22.420 cpu : usr=2.30%, sys=7.81%, ctx=15156, majf=0, minf=2 00:09:22.420 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.420 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.420 issued rwts: total=15154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.420 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.420 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2052512: Wed Nov 20 15:18:26 2024 00:09:22.420 read: IOPS=504, BW=2018KiB/s (2067kB/s)(6860KiB/3399msec) 00:09:22.420 slat 
(usec): min=6, max=15526, avg=32.79, stdev=577.55 00:09:22.420 clat (usec): min=198, max=42244, avg=1932.68, stdev=8149.23 00:09:22.421 lat (usec): min=205, max=57770, avg=1965.47, stdev=8287.46 00:09:22.421 clat percentiles (usec): 00:09:22.421 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 223], 00:09:22.421 | 30.00th=[ 227], 40.00th=[ 229], 50.00th=[ 231], 60.00th=[ 233], 00:09:22.421 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 289], 00:09:22.421 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:22.421 | 99.99th=[42206] 00:09:22.421 bw ( KiB/s): min= 93, max= 9856, per=11.13%, avg=2272.83, stdev=3939.15, samples=6 00:09:22.421 iops : min= 23, max= 2464, avg=568.17, stdev=984.82, samples=6 00:09:22.421 lat (usec) : 250=89.74%, 500=5.89%, 750=0.12% 00:09:22.421 lat (msec) : 20=0.06%, 50=4.14% 00:09:22.421 cpu : usr=0.21%, sys=0.41%, ctx=1722, majf=0, minf=2 00:09:22.421 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.421 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.421 issued rwts: total=1716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.421 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.421 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2052534: Wed Nov 20 15:18:26 2024 00:09:22.421 read: IOPS=102, BW=410KiB/s (420kB/s)(1212KiB/2953msec) 00:09:22.421 slat (nsec): min=6335, max=26722, avg=11229.43, stdev=6286.22 00:09:22.421 clat (usec): min=193, max=41636, avg=9662.63, stdev=17177.07 00:09:22.421 lat (usec): min=203, max=41646, avg=9673.82, stdev=17176.80 00:09:22.421 clat percentiles (usec): 00:09:22.421 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 233], 00:09:22.421 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 251], 60.00th=[ 269], 00:09:22.421 | 70.00th=[ 351], 
80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:09:22.421 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:22.421 | 99.99th=[41681] 00:09:22.421 bw ( KiB/s): min= 224, max= 400, per=1.44%, avg=294.40, stdev=71.87, samples=5 00:09:22.421 iops : min= 56, max= 100, avg=73.60, stdev=17.97, samples=5 00:09:22.421 lat (usec) : 250=48.36%, 500=26.97%, 750=1.32% 00:09:22.421 lat (msec) : 50=23.03% 00:09:22.421 cpu : usr=0.07%, sys=0.07%, ctx=305, majf=0, minf=1 00:09:22.421 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.421 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.421 issued rwts: total=304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.421 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.421 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2052540: Wed Nov 20 15:18:26 2024 00:09:22.421 read: IOPS=63, BW=251KiB/s (257kB/s)(680KiB/2705msec) 00:09:22.421 slat (nsec): min=7011, max=30038, avg=14384.95, stdev=7372.90 00:09:22.421 clat (usec): min=205, max=42425, avg=15836.67, stdev=19878.41 00:09:22.421 lat (usec): min=214, max=42433, avg=15851.00, stdev=19876.21 00:09:22.421 clat percentiles (usec): 00:09:22.421 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 231], 00:09:22.421 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 258], 60.00th=[ 306], 00:09:22.421 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:09:22.421 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:22.421 | 99.99th=[42206] 00:09:22.421 bw ( KiB/s): min= 208, max= 304, per=1.22%, avg=248.00, stdev=37.52, samples=5 00:09:22.421 iops : min= 52, max= 76, avg=62.00, stdev= 9.38, samples=5 00:09:22.421 lat (usec) : 250=46.20%, 500=15.20% 00:09:22.421 lat (msec) : 50=38.01% 00:09:22.421 cpu : 
usr=0.00%, sys=0.15%, ctx=174, majf=0, minf=2 00:09:22.421 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.421 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.421 issued rwts: total=171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.421 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.421 00:09:22.421 Run status group 0 (all jobs): 00:09:22.421 READ: bw=19.9MiB/s (20.9MB/s), 251KiB/s-18.9MiB/s (257kB/s-19.8MB/s), io=67.7MiB (71.0MB), run=2705-3399msec 00:09:22.421 00:09:22.421 Disk stats (read/write): 00:09:22.421 nvme0n1: ios=15031/0, merge=0/0, ticks=2760/0, in_queue=2760, util=94.85% 00:09:22.421 nvme0n2: ios=1754/0, merge=0/0, ticks=3453/0, in_queue=3453, util=96.75% 00:09:22.421 nvme0n3: ios=342/0, merge=0/0, ticks=3001/0, in_queue=3001, util=99.80% 00:09:22.421 nvme0n4: ios=203/0, merge=0/0, ticks=3331/0, in_queue=3331, util=99.33% 00:09:22.421 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:22.421 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:22.678 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:22.678 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:22.935 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:22.935 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:23.191 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:23.191 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:23.191 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:23.191 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2052278 00:09:23.191 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:23.191 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.449 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:23.449 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:23.449 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:23.449 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.449 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:23.449 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.449 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:23.449 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:23.449 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:23.449 nvmf hotplug test: fio failed as expected 00:09:23.449 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:23.707 rmmod nvme_tcp 00:09:23.707 rmmod nvme_fabrics 00:09:23.707 rmmod nvme_keyring 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:23.707 15:18:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2049365 ']' 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2049365 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2049365 ']' 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2049365 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2049365 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2049365' 00:09:23.707 killing process with pid 2049365 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2049365 00:09:23.707 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2049365 00:09:23.966 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:23.966 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:23.966 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:23.966 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:23.966 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-save 00:09:23.966 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:23.966 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:23.966 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:23.966 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:23.966 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.967 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.967 15:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.503 15:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.503 00:09:26.503 real 0m27.013s 00:09:26.503 user 1m47.071s 00:09:26.503 sys 0m8.633s 00:09:26.503 15:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.503 15:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:26.503 ************************************ 00:09:26.503 END TEST nvmf_fio_target 00:09:26.503 ************************************ 00:09:26.503 15:18:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:26.503 15:18:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:26.503 15:18:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.503 15:18:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:26.503 ************************************ 
00:09:26.503 START TEST nvmf_bdevio 00:09:26.503 ************************************ 00:09:26.503 15:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:26.503 * Looking for test storage... 00:09:26.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.503 15:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:26.503 15:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:26.503 15:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.503 15:18:30 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:26.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.503 --rc genhtml_branch_coverage=1 00:09:26.503 --rc genhtml_function_coverage=1 00:09:26.503 --rc genhtml_legend=1 00:09:26.503 --rc geninfo_all_blocks=1 00:09:26.503 --rc geninfo_unexecuted_blocks=1 00:09:26.503 00:09:26.503 ' 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:26.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.503 --rc genhtml_branch_coverage=1 00:09:26.503 --rc genhtml_function_coverage=1 00:09:26.503 --rc genhtml_legend=1 00:09:26.503 --rc geninfo_all_blocks=1 00:09:26.503 --rc geninfo_unexecuted_blocks=1 00:09:26.503 00:09:26.503 ' 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:26.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.503 --rc genhtml_branch_coverage=1 00:09:26.503 --rc genhtml_function_coverage=1 00:09:26.503 --rc genhtml_legend=1 00:09:26.503 --rc geninfo_all_blocks=1 00:09:26.503 --rc geninfo_unexecuted_blocks=1 00:09:26.503 00:09:26.503 ' 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:26.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.503 --rc genhtml_branch_coverage=1 00:09:26.503 --rc genhtml_function_coverage=1 00:09:26.503 --rc genhtml_legend=1 00:09:26.503 --rc geninfo_all_blocks=1 00:09:26.503 --rc geninfo_unexecuted_blocks=1 00:09:26.503 00:09:26.503 ' 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.503 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:26.504 15:18:30 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:26.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:26.504 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.964 15:18:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:31.964 15:18:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:31.964 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.964 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:31.965 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:31.965 
15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:31.965 Found net devices under 0000:86:00.0: cvl_0_0 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:31.965 Found net devices under 0000:86:00.1: cvl_0_1 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.965 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:32.225 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:32.225 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:32.225 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:32.225 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:32.225 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:32.225 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:32.225 15:18:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:32.225 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:32.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:32.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:09:32.225 00:09:32.225 --- 10.0.0.2 ping statistics --- 00:09:32.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.225 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:09:32.225 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:32.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:32.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:09:32.225 00:09:32.225 --- 10.0.0.1 ping statistics --- 00:09:32.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.225 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:09:32.225 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.225 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:32.225 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:32.225 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.225 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:32.225 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:32.225 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.225 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:32.225 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:32.225 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:32.225 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:32.225 15:18:36 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:32.225 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.225 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2056898 00:09:32.225 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:32.225 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2056898 00:09:32.225 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2056898 ']' 00:09:32.225 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.225 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.226 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.226 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.226 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.226 [2024-11-20 15:18:36.117666] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:09:32.226 [2024-11-20 15:18:36.117710] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.484 [2024-11-20 15:18:36.198020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.484 [2024-11-20 15:18:36.237978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.484 [2024-11-20 15:18:36.238019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.484 [2024-11-20 15:18:36.238027] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.484 [2024-11-20 15:18:36.238034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.484 [2024-11-20 15:18:36.238060] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:32.484 [2024-11-20 15:18:36.239714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:32.484 [2024-11-20 15:18:36.239812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:32.484 [2024-11-20 15:18:36.239903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:32.484 [2024-11-20 15:18:36.239909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.420 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.420 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:33.420 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:33.420 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:33.420 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.420 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.420 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:33.420 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.420 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.420 [2024-11-20 15:18:37.006384] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.420 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.420 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:33.420 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.420 15:18:37 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.420 Malloc0 00:09:33.420 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.420 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:33.420 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.420 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.420 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.420 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:33.420 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.420 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.420 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.421 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.421 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.421 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.421 [2024-11-20 15:18:37.072535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.421 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.421 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:33.421 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:33.421 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:33.421 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:33.421 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:33.421 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:33.421 { 00:09:33.421 "params": { 00:09:33.421 "name": "Nvme$subsystem", 00:09:33.421 "trtype": "$TEST_TRANSPORT", 00:09:33.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:33.421 "adrfam": "ipv4", 00:09:33.421 "trsvcid": "$NVMF_PORT", 00:09:33.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:33.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:33.421 "hdgst": ${hdgst:-false}, 00:09:33.421 "ddgst": ${ddgst:-false} 00:09:33.421 }, 00:09:33.421 "method": "bdev_nvme_attach_controller" 00:09:33.421 } 00:09:33.421 EOF 00:09:33.421 )") 00:09:33.421 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:33.421 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:33.421 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:33.421 15:18:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:33.421 "params": { 00:09:33.421 "name": "Nvme1", 00:09:33.421 "trtype": "tcp", 00:09:33.421 "traddr": "10.0.0.2", 00:09:33.421 "adrfam": "ipv4", 00:09:33.421 "trsvcid": "4420", 00:09:33.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:33.421 "hdgst": false, 00:09:33.421 "ddgst": false 00:09:33.421 }, 00:09:33.421 "method": "bdev_nvme_attach_controller" 00:09:33.421 }' 00:09:33.421 [2024-11-20 15:18:37.124631] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:09:33.421 [2024-11-20 15:18:37.124676] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2057148 ] 00:09:33.421 [2024-11-20 15:18:37.200346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:33.421 [2024-11-20 15:18:37.244402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.421 [2024-11-20 15:18:37.244507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.421 [2024-11-20 15:18:37.244507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.679 I/O targets: 00:09:33.679 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:33.679 00:09:33.679 00:09:33.679 CUnit - A unit testing framework for C - Version 2.1-3 00:09:33.679 http://cunit.sourceforge.net/ 00:09:33.679 00:09:33.679 00:09:33.679 Suite: bdevio tests on: Nvme1n1 00:09:33.938 Test: blockdev write read block ...passed 00:09:33.938 Test: blockdev write zeroes read block ...passed 00:09:33.938 Test: blockdev write zeroes read no split ...passed 00:09:33.938 Test: blockdev write zeroes read split 
...passed 00:09:33.938 Test: blockdev write zeroes read split partial ...passed 00:09:33.938 Test: blockdev reset ...[2024-11-20 15:18:37.718434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:33.938 [2024-11-20 15:18:37.718496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121d340 (9): Bad file descriptor 00:09:33.938 [2024-11-20 15:18:37.730705] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:33.938 passed 00:09:33.938 Test: blockdev write read 8 blocks ...passed 00:09:33.938 Test: blockdev write read size > 128k ...passed 00:09:33.938 Test: blockdev write read invalid size ...passed 00:09:33.938 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:33.938 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:33.938 Test: blockdev write read max offset ...passed 00:09:34.197 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:34.197 Test: blockdev writev readv 8 blocks ...passed 00:09:34.197 Test: blockdev writev readv 30 x 1block ...passed 00:09:34.197 Test: blockdev writev readv block ...passed 00:09:34.197 Test: blockdev writev readv size > 128k ...passed 00:09:34.197 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:34.197 Test: blockdev comparev and writev ...[2024-11-20 15:18:37.983060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:34.197 [2024-11-20 15:18:37.983089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:34.197 [2024-11-20 15:18:37.983103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:34.197 [2024-11-20 
15:18:37.983111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:34.197 [2024-11-20 15:18:37.983352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:34.197 [2024-11-20 15:18:37.983368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:34.197 [2024-11-20 15:18:37.983380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:34.197 [2024-11-20 15:18:37.983387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:34.197 [2024-11-20 15:18:37.983627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:34.197 [2024-11-20 15:18:37.983638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:34.197 [2024-11-20 15:18:37.983649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:34.197 [2024-11-20 15:18:37.983657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:34.197 [2024-11-20 15:18:37.983897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:34.197 [2024-11-20 15:18:37.983908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:34.197 [2024-11-20 15:18:37.983919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:34.197 [2024-11-20 15:18:37.983926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:34.197 passed 00:09:34.197 Test: blockdev nvme passthru rw ...passed 00:09:34.197 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:18:38.066269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:34.197 [2024-11-20 15:18:38.066289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:34.197 [2024-11-20 15:18:38.066395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:34.197 [2024-11-20 15:18:38.066406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:34.197 [2024-11-20 15:18:38.066521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:34.197 [2024-11-20 15:18:38.066534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:34.197 [2024-11-20 15:18:38.066648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:34.197 [2024-11-20 15:18:38.066660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:34.197 passed 00:09:34.197 Test: blockdev nvme admin passthru ...passed 00:09:34.455 Test: blockdev copy ...passed 00:09:34.455 00:09:34.455 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.455 suites 1 1 n/a 0 0 00:09:34.455 tests 23 23 23 0 0 00:09:34.455 asserts 152 152 152 0 n/a 00:09:34.455 00:09:34.455 Elapsed time = 1.121 seconds 
00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:34.455 rmmod nvme_tcp 00:09:34.455 rmmod nvme_fabrics 00:09:34.455 rmmod nvme_keyring 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2056898 ']' 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2056898 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2056898 ']' 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2056898 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.455 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2056898 00:09:34.714 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:34.714 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:34.714 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2056898' 00:09:34.714 killing process with pid 2056898 00:09:34.714 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2056898 00:09:34.714 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2056898 00:09:34.714 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:34.714 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:34.714 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:34.714 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:34.714 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:34.714 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:34.714 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:34.714 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:34.714 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:34.715 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.715 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.715 15:18:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:37.255 00:09:37.255 real 0m10.761s 00:09:37.255 user 0m13.519s 00:09:37.255 sys 0m5.080s 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:37.255 ************************************ 00:09:37.255 END TEST nvmf_bdevio 00:09:37.255 ************************************ 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:37.255 00:09:37.255 real 4m36.334s 00:09:37.255 user 10m30.543s 00:09:37.255 sys 1m39.726s 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:37.255 ************************************ 00:09:37.255 END TEST nvmf_target_core 00:09:37.255 ************************************ 00:09:37.255 15:18:40 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:37.255 15:18:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:37.255 15:18:40 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.255 15:18:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:09:37.255 ************************************ 00:09:37.255 START TEST nvmf_target_extra 00:09:37.255 ************************************ 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:37.255 * Looking for test storage... 00:09:37.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:37.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.255 --rc genhtml_branch_coverage=1 00:09:37.255 --rc genhtml_function_coverage=1 00:09:37.255 --rc genhtml_legend=1 00:09:37.255 --rc geninfo_all_blocks=1 
00:09:37.255 --rc geninfo_unexecuted_blocks=1 00:09:37.255 00:09:37.255 ' 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:37.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.255 --rc genhtml_branch_coverage=1 00:09:37.255 --rc genhtml_function_coverage=1 00:09:37.255 --rc genhtml_legend=1 00:09:37.255 --rc geninfo_all_blocks=1 00:09:37.255 --rc geninfo_unexecuted_blocks=1 00:09:37.255 00:09:37.255 ' 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:37.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.255 --rc genhtml_branch_coverage=1 00:09:37.255 --rc genhtml_function_coverage=1 00:09:37.255 --rc genhtml_legend=1 00:09:37.255 --rc geninfo_all_blocks=1 00:09:37.255 --rc geninfo_unexecuted_blocks=1 00:09:37.255 00:09:37.255 ' 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:37.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.255 --rc genhtml_branch_coverage=1 00:09:37.255 --rc genhtml_function_coverage=1 00:09:37.255 --rc genhtml_legend=1 00:09:37.255 --rc geninfo_all_blocks=1 00:09:37.255 --rc geninfo_unexecuted_blocks=1 00:09:37.255 00:09:37.255 ' 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.255 15:18:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:37.256 15:18:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.256 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:37.256 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.256 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.256 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.256 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.256 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.256 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.256 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.256 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.256 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.256 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:37.256 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:37.256 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:37.256 15:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:37.256 15:18:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:37.256 15:18:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.256 15:18:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:37.256 ************************************ 00:09:37.256 START TEST nvmf_example 00:09:37.256 ************************************ 00:09:37.256 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:37.256 * Looking for test storage... 00:09:37.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.256 
15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:37.256 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.516 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:37.516 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.516 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:37.516 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:37.516 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.516 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:37.516 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:37.516 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.516 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.516 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:37.516 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.516 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:37.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.516 --rc genhtml_branch_coverage=1 00:09:37.517 --rc genhtml_function_coverage=1 00:09:37.517 --rc genhtml_legend=1 00:09:37.517 --rc geninfo_all_blocks=1 00:09:37.517 --rc geninfo_unexecuted_blocks=1 00:09:37.517 00:09:37.517 ' 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:37.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.517 --rc genhtml_branch_coverage=1 00:09:37.517 --rc genhtml_function_coverage=1 00:09:37.517 --rc genhtml_legend=1 00:09:37.517 --rc geninfo_all_blocks=1 00:09:37.517 --rc geninfo_unexecuted_blocks=1 00:09:37.517 00:09:37.517 ' 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:37.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.517 --rc genhtml_branch_coverage=1 00:09:37.517 --rc genhtml_function_coverage=1 00:09:37.517 --rc genhtml_legend=1 00:09:37.517 --rc geninfo_all_blocks=1 00:09:37.517 --rc geninfo_unexecuted_blocks=1 00:09:37.517 00:09:37.517 ' 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:37.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.517 --rc 
genhtml_branch_coverage=1 00:09:37.517 --rc genhtml_function_coverage=1 00:09:37.517 --rc genhtml_legend=1 00:09:37.517 --rc geninfo_all_blocks=1 00:09:37.517 --rc geninfo_unexecuted_blocks=1 00:09:37.517 00:09:37.517 ' 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:37.517 15:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.517 
15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:37.517 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:44.092 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:44.093 15:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:44.093 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:44.093 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:44.093 Found net devices under 0000:86:00.0: cvl_0_0 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:44.093 15:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:44.093 Found net devices under 0000:86:00.1: cvl_0_1 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:44.093 
15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:44.093 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:44.093 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:44.093 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:44.093 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:44.093 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:44.093 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:44.093 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:44.093 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:44.093 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:44.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:44.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:09:44.093 00:09:44.093 --- 10.0.0.2 ping statistics --- 00:09:44.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.093 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:09:44.093 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:44.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:44.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:09:44.093 00:09:44.093 --- 10.0.0.1 ping statistics --- 00:09:44.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.093 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:09:44.093 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.093 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:44.093 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:44.093 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.093 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:44.093 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:44.093 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.093 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:44.093 15:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:44.094 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:44.094 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:44.094 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:44.094 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.094 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:44.094 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:44.094 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2060973 00:09:44.094 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:44.094 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:44.094 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2060973 00:09:44.094 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2060973 ']' 00:09:44.094 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.094 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.094 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:44.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.094 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.094 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:44.353 
15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:44.353 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:56.556 Initializing NVMe Controllers 00:09:56.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:56.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:56.556 Initialization complete. Launching workers. 00:09:56.556 ======================================================== 00:09:56.556 Latency(us) 00:09:56.556 Device Information : IOPS MiB/s Average min max 00:09:56.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17947.30 70.11 3566.57 535.93 15575.66 00:09:56.556 ======================================================== 00:09:56.556 Total : 17947.30 70.11 3566.57 535.93 15575.66 00:09:56.556 00:09:56.556 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:56.556 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:56.556 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:56.556 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:56.556 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:56.556 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:56.556 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:56.556 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:56.556 rmmod nvme_tcp 00:09:56.556 rmmod nvme_fabrics 00:09:56.556 rmmod nvme_keyring 00:09:56.556 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:56.556 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2060973 ']' 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2060973 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2060973 ']' 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2060973 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2060973 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2060973' 00:09:56.557 killing process with pid 2060973 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2060973 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2060973 00:09:56.557 nvmf threads initialize successfully 00:09:56.557 bdev subsystem init successfully 00:09:56.557 created a nvmf target service 00:09:56.557 create targets's poll groups done 00:09:56.557 all subsystems of target started 00:09:56.557 nvmf target is running 00:09:56.557 all subsystems of target stopped 00:09:56.557 destroy targets's poll groups done 00:09:56.557 destroyed the nvmf target service 00:09:56.557 bdev subsystem 
finish successfully 00:09:56.557 nvmf threads destroy successfully 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.557 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.126 15:19:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:57.126 15:19:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:57.126 15:19:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:57.126 15:19:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.126 00:09:57.126 real 0m19.800s 00:09:57.126 user 0m45.884s 00:09:57.126 sys 0m6.121s 00:09:57.126 
15:19:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.126 15:19:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.126 ************************************ 00:09:57.126 END TEST nvmf_example 00:09:57.126 ************************************ 00:09:57.126 15:19:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:57.126 15:19:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:57.126 15:19:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.126 15:19:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:57.126 ************************************ 00:09:57.126 START TEST nvmf_filesystem 00:09:57.126 ************************************ 00:09:57.126 15:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:57.126 * Looking for test storage... 
00:09:57.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.126 15:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:57.126 15:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:57.126 15:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:57.126 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:57.126 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.126 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.126 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.126 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.126 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.126 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.126 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.126 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.126 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:57.389 
15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:57.389 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:57.389 --rc genhtml_branch_coverage=1 00:09:57.389 --rc genhtml_function_coverage=1 00:09:57.389 --rc genhtml_legend=1 00:09:57.389 --rc geninfo_all_blocks=1 00:09:57.389 --rc geninfo_unexecuted_blocks=1 00:09:57.389 00:09:57.389 ' 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:57.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.389 --rc genhtml_branch_coverage=1 00:09:57.389 --rc genhtml_function_coverage=1 00:09:57.389 --rc genhtml_legend=1 00:09:57.389 --rc geninfo_all_blocks=1 00:09:57.389 --rc geninfo_unexecuted_blocks=1 00:09:57.389 00:09:57.389 ' 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:57.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.389 --rc genhtml_branch_coverage=1 00:09:57.389 --rc genhtml_function_coverage=1 00:09:57.389 --rc genhtml_legend=1 00:09:57.389 --rc geninfo_all_blocks=1 00:09:57.389 --rc geninfo_unexecuted_blocks=1 00:09:57.389 00:09:57.389 ' 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:57.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.389 --rc genhtml_branch_coverage=1 00:09:57.389 --rc genhtml_function_coverage=1 00:09:57.389 --rc genhtml_legend=1 00:09:57.389 --rc geninfo_all_blocks=1 00:09:57.389 --rc geninfo_unexecuted_blocks=1 00:09:57.389 00:09:57.389 ' 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:57.389 15:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:57.389 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:57.389 15:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:57.390 15:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:57.390 15:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:57.390 15:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:57.390 
15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:57.390 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:57.390 #define SPDK_CONFIG_H 00:09:57.390 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:57.390 #define SPDK_CONFIG_APPS 1 00:09:57.390 #define SPDK_CONFIG_ARCH native 00:09:57.390 #undef SPDK_CONFIG_ASAN 00:09:57.390 #undef SPDK_CONFIG_AVAHI 00:09:57.390 #undef SPDK_CONFIG_CET 00:09:57.390 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:57.390 #define SPDK_CONFIG_COVERAGE 1 00:09:57.390 #define SPDK_CONFIG_CROSS_PREFIX 00:09:57.391 #undef SPDK_CONFIG_CRYPTO 00:09:57.391 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:57.391 #undef SPDK_CONFIG_CUSTOMOCF 00:09:57.391 #undef SPDK_CONFIG_DAOS 00:09:57.391 #define SPDK_CONFIG_DAOS_DIR 00:09:57.391 #define SPDK_CONFIG_DEBUG 1 00:09:57.391 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:57.391 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:57.391 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:57.391 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:57.391 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:57.391 #undef SPDK_CONFIG_DPDK_UADK 00:09:57.391 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:57.391 #define SPDK_CONFIG_EXAMPLES 1 00:09:57.391 #undef SPDK_CONFIG_FC 00:09:57.391 #define SPDK_CONFIG_FC_PATH 00:09:57.391 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:57.391 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:57.391 #define SPDK_CONFIG_FSDEV 1 00:09:57.391 #undef SPDK_CONFIG_FUSE 00:09:57.391 #undef SPDK_CONFIG_FUZZER 00:09:57.391 #define SPDK_CONFIG_FUZZER_LIB 00:09:57.391 #undef SPDK_CONFIG_GOLANG 00:09:57.391 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:57.391 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:57.391 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:57.391 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:57.391 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:57.391 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:57.391 #undef SPDK_CONFIG_HAVE_LZ4 00:09:57.391 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:57.391 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:57.391 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:57.391 #define SPDK_CONFIG_IDXD 1 00:09:57.391 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:57.391 #undef SPDK_CONFIG_IPSEC_MB 00:09:57.391 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:57.391 #define SPDK_CONFIG_ISAL 1 00:09:57.391 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:57.391 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:57.391 #define SPDK_CONFIG_LIBDIR 00:09:57.391 #undef SPDK_CONFIG_LTO 00:09:57.391 #define SPDK_CONFIG_MAX_LCORES 128 00:09:57.391 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:57.391 #define SPDK_CONFIG_NVME_CUSE 1 00:09:57.391 #undef SPDK_CONFIG_OCF 00:09:57.391 #define SPDK_CONFIG_OCF_PATH 00:09:57.391 #define SPDK_CONFIG_OPENSSL_PATH 00:09:57.391 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:57.391 #define SPDK_CONFIG_PGO_DIR 00:09:57.391 #undef SPDK_CONFIG_PGO_USE 00:09:57.391 #define SPDK_CONFIG_PREFIX /usr/local 00:09:57.391 #undef SPDK_CONFIG_RAID5F 00:09:57.391 #undef SPDK_CONFIG_RBD 00:09:57.391 #define SPDK_CONFIG_RDMA 1 00:09:57.391 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:57.391 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:57.391 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:57.391 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:57.391 #define SPDK_CONFIG_SHARED 1 00:09:57.391 #undef SPDK_CONFIG_SMA 00:09:57.391 #define SPDK_CONFIG_TESTS 1 00:09:57.391 #undef SPDK_CONFIG_TSAN 00:09:57.391 #define SPDK_CONFIG_UBLK 1 00:09:57.391 #define SPDK_CONFIG_UBSAN 1 00:09:57.391 #undef SPDK_CONFIG_UNIT_TESTS 00:09:57.391 #undef SPDK_CONFIG_URING 00:09:57.391 #define SPDK_CONFIG_URING_PATH 00:09:57.391 #undef SPDK_CONFIG_URING_ZNS 00:09:57.391 #undef SPDK_CONFIG_USDT 00:09:57.391 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:57.391 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:57.391 #define SPDK_CONFIG_VFIO_USER 1 00:09:57.391 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:57.391 #define SPDK_CONFIG_VHOST 1 00:09:57.391 #define SPDK_CONFIG_VIRTIO 1 00:09:57.391 #undef SPDK_CONFIG_VTUNE 00:09:57.391 #define SPDK_CONFIG_VTUNE_DIR 00:09:57.391 #define SPDK_CONFIG_WERROR 1 00:09:57.391 #define SPDK_CONFIG_WPDK_DIR 00:09:57.391 #undef SPDK_CONFIG_XNVME 00:09:57.391 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:57.391 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:57.392 15:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:57.392 
15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:57.392 15:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:57.392 
15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:57.392 15:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:57.392 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:57.393 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2063370 ]] 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2063370 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.8mLItU 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.8mLItU/tests/target /tmp/spdk.8mLItU 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189227827200 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963961344 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6736134144 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971949568 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981325312 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:09:57.394 15:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=655360 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:57.394 * Looking for test storage... 
00:09:57.394 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189227827200 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8950726656 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.395 15:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:57.395 15:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:57.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.395 --rc genhtml_branch_coverage=1 00:09:57.395 --rc genhtml_function_coverage=1 00:09:57.395 --rc genhtml_legend=1 00:09:57.395 --rc geninfo_all_blocks=1 00:09:57.395 --rc geninfo_unexecuted_blocks=1 00:09:57.395 00:09:57.395 ' 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:57.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.395 --rc genhtml_branch_coverage=1 00:09:57.395 --rc genhtml_function_coverage=1 00:09:57.395 --rc genhtml_legend=1 00:09:57.395 --rc geninfo_all_blocks=1 00:09:57.395 --rc geninfo_unexecuted_blocks=1 00:09:57.395 00:09:57.395 ' 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:57.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.395 --rc genhtml_branch_coverage=1 00:09:57.395 --rc genhtml_function_coverage=1 00:09:57.395 --rc genhtml_legend=1 00:09:57.395 --rc geninfo_all_blocks=1 00:09:57.395 --rc geninfo_unexecuted_blocks=1 00:09:57.395 00:09:57.395 ' 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:57.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.395 --rc genhtml_branch_coverage=1 00:09:57.395 --rc genhtml_function_coverage=1 00:09:57.395 --rc genhtml_legend=1 00:09:57.395 --rc geninfo_all_blocks=1 00:09:57.395 --rc geninfo_unexecuted_blocks=1 00:09:57.395 00:09:57.395 ' 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.395 15:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.395 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.654 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:57.654 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:57.654 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.654 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.654 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.654 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.654 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.654 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.654 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.654 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.654 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:57.655 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:04.229 15:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:04.229 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:04.229 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:04.229 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.230 15:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:04.230 Found net devices under 0000:86:00.0: cvl_0_0 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:04.230 Found net devices under 0000:86:00.1: cvl_0_1 00:10:04.230 15:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:04.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:04.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:10:04.230 00:10:04.230 --- 10.0.0.2 ping statistics --- 00:10:04.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.230 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:04.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:04.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:10:04.230 00:10:04.230 --- 10.0.0.1 ping statistics --- 00:10:04.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.230 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:04.230 15:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.230 ************************************ 00:10:04.230 START TEST nvmf_filesystem_no_in_capsule 00:10:04.230 ************************************ 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:04.230 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2066412 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2066412 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 2066412 ']' 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.231 [2024-11-20 15:19:07.393732] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:10:04.231 [2024-11-20 15:19:07.393775] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.231 [2024-11-20 15:19:07.474211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:04.231 [2024-11-20 15:19:07.517011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.231 [2024-11-20 15:19:07.517049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:04.231 [2024-11-20 15:19:07.517057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.231 [2024-11-20 15:19:07.517063] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.231 [2024-11-20 15:19:07.517069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:04.231 [2024-11-20 15:19:07.518674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.231 [2024-11-20 15:19:07.518784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.231 [2024-11-20 15:19:07.518895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.231 [2024-11-20 15:19:07.518896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.231 [2024-11-20 15:19:07.656652] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.231 Malloc1 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.231 [2024-11-20 15:19:07.808587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:04.231 15:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.231 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:04.231 { 00:10:04.232 "name": "Malloc1", 00:10:04.232 "aliases": [ 00:10:04.232 "ed84adf9-71d7-4e65-813f-9dfd3c07d345" 00:10:04.232 ], 00:10:04.232 "product_name": "Malloc disk", 00:10:04.232 "block_size": 512, 00:10:04.232 "num_blocks": 1048576, 00:10:04.232 "uuid": "ed84adf9-71d7-4e65-813f-9dfd3c07d345", 00:10:04.232 "assigned_rate_limits": { 00:10:04.232 "rw_ios_per_sec": 0, 00:10:04.232 "rw_mbytes_per_sec": 0, 00:10:04.232 "r_mbytes_per_sec": 0, 00:10:04.232 "w_mbytes_per_sec": 0 00:10:04.232 }, 00:10:04.232 "claimed": true, 00:10:04.232 "claim_type": "exclusive_write", 00:10:04.232 "zoned": false, 00:10:04.232 "supported_io_types": { 00:10:04.232 "read": true, 00:10:04.232 "write": true, 00:10:04.232 "unmap": true, 00:10:04.232 "flush": true, 00:10:04.232 "reset": true, 00:10:04.232 "nvme_admin": false, 00:10:04.232 "nvme_io": false, 00:10:04.232 "nvme_io_md": false, 00:10:04.232 "write_zeroes": true, 00:10:04.232 "zcopy": true, 00:10:04.232 "get_zone_info": false, 00:10:04.232 "zone_management": false, 00:10:04.232 "zone_append": false, 00:10:04.232 "compare": false, 00:10:04.232 "compare_and_write": 
false, 00:10:04.232 "abort": true, 00:10:04.232 "seek_hole": false, 00:10:04.232 "seek_data": false, 00:10:04.232 "copy": true, 00:10:04.232 "nvme_iov_md": false 00:10:04.232 }, 00:10:04.232 "memory_domains": [ 00:10:04.232 { 00:10:04.232 "dma_device_id": "system", 00:10:04.232 "dma_device_type": 1 00:10:04.232 }, 00:10:04.232 { 00:10:04.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.232 "dma_device_type": 2 00:10:04.232 } 00:10:04.232 ], 00:10:04.232 "driver_specific": {} 00:10:04.232 } 00:10:04.232 ]' 00:10:04.232 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:04.232 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:04.232 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:04.232 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:04.232 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:04.232 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:04.232 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:04.232 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:05.169 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:05.169 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:05.169 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:05.169 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:05.169 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:07.717 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:07.717 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:07.717 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:07.717 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:07.717 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:07.717 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:07.717 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:07.718 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:07.718 15:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:07.718 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:07.718 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:07.718 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:07.718 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:07.718 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:07.718 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:07.718 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:07.718 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:07.718 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:07.976 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:09.354 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:09.354 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:09.354 15:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:09.354 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.354 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:09.354 ************************************ 00:10:09.354 START TEST filesystem_ext4 00:10:09.354 ************************************ 00:10:09.354 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:09.354 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:09.354 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:09.354 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:09.354 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:09.354 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:09.354 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:09.354 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:09.354 15:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:09.354 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:09.354 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:09.354 mke2fs 1.47.0 (5-Feb-2023) 00:10:09.354 Discarding device blocks: 0/522240 done 00:10:09.354 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:09.354 Filesystem UUID: a5c41dd2-ee44-4d28-9037-671314dfdb86 00:10:09.354 Superblock backups stored on blocks: 00:10:09.354 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:09.354 00:10:09.354 Allocating group tables: 0/64 done 00:10:09.354 Writing inode tables: 0/64 done 00:10:09.354 Creating journal (8192 blocks): done 00:10:09.354 Writing superblocks and filesystem accounting information: 0/64 done 00:10:09.354 00:10:09.354 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:09.354 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:15.922 15:19:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2066412 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:15.922 00:10:15.922 real 0m5.851s 00:10:15.922 user 0m0.028s 00:10:15.922 sys 0m0.072s 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:15.922 ************************************ 00:10:15.922 END TEST filesystem_ext4 00:10:15.922 ************************************ 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:15.922 
15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.922 ************************************ 00:10:15.922 START TEST filesystem_btrfs 00:10:15.922 ************************************ 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:15.922 15:19:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:15.922 15:19:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:15.922 btrfs-progs v6.8.1 00:10:15.922 See https://btrfs.readthedocs.io for more information. 00:10:15.922 00:10:15.922 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:15.922 NOTE: several default settings have changed in version 5.15, please make sure 00:10:15.922 this does not affect your deployments: 00:10:15.922 - DUP for metadata (-m dup) 00:10:15.922 - enabled no-holes (-O no-holes) 00:10:15.922 - enabled free-space-tree (-R free-space-tree) 00:10:15.922 00:10:15.922 Label: (null) 00:10:15.922 UUID: 772900f0-fa69-413c-81d6-a5e516b40298 00:10:15.922 Node size: 16384 00:10:15.922 Sector size: 4096 (CPU page size: 4096) 00:10:15.922 Filesystem size: 510.00MiB 00:10:15.922 Block group profiles: 00:10:15.922 Data: single 8.00MiB 00:10:15.922 Metadata: DUP 32.00MiB 00:10:15.922 System: DUP 8.00MiB 00:10:15.922 SSD detected: yes 00:10:15.922 Zoned device: no 00:10:15.922 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:15.922 Checksum: crc32c 00:10:15.922 Number of devices: 1 00:10:15.922 Devices: 00:10:15.923 ID SIZE PATH 00:10:15.923 1 510.00MiB /dev/nvme0n1p1 00:10:15.923 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:15.923 15:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2066412 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:15.923 00:10:15.923 real 0m0.583s 00:10:15.923 user 0m0.027s 00:10:15.923 sys 0m0.113s 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.923 
15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:15.923 ************************************ 00:10:15.923 END TEST filesystem_btrfs 00:10:15.923 ************************************ 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.923 ************************************ 00:10:15.923 START TEST filesystem_xfs 00:10:15.923 ************************************ 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:15.923 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:15.923 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:15.923 = sectsz=512 attr=2, projid32bit=1 00:10:15.923 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:15.923 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:15.923 data = bsize=4096 blocks=130560, imaxpct=25 00:10:15.923 = sunit=0 swidth=0 blks 00:10:15.923 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:15.923 log =internal log bsize=4096 blocks=16384, version=2 00:10:15.923 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:15.923 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:16.489 Discarding blocks...Done. 
00:10:16.489 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:16.490 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:18.392 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:18.392 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:18.392 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:18.392 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:18.392 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:18.392 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:18.392 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2066412 00:10:18.392 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:18.392 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:18.392 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:18.392 15:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:18.392 00:10:18.392 real 0m2.561s 00:10:18.392 user 0m0.020s 00:10:18.392 sys 0m0.079s 00:10:18.392 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.392 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:18.392 ************************************ 00:10:18.392 END TEST filesystem_xfs 00:10:18.392 ************************************ 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:18.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2066412 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2066412 ']' 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2066412 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2066412 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2066412' 00:10:18.392 killing process with pid 2066412 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2066412 00:10:18.392 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2066412 00:10:19.006 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:19.006 00:10:19.006 real 0m15.262s 00:10:19.006 user 0m59.935s 00:10:19.006 sys 0m1.401s 00:10:19.006 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.006 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.006 ************************************ 00:10:19.006 END TEST nvmf_filesystem_no_in_capsule 00:10:19.006 ************************************ 00:10:19.006 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:19.006 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:19.006 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.006 15:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:19.006 ************************************ 00:10:19.006 START TEST nvmf_filesystem_in_capsule 00:10:19.006 ************************************ 00:10:19.006 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:19.006 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:19.006 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:19.006 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:19.006 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:19.006 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.006 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2069170 00:10:19.006 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2069170 00:10:19.006 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:19.007 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2069170 ']' 00:10:19.007 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.007 15:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.007 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.007 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.007 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.007 [2024-11-20 15:19:22.730350] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:10:19.007 [2024-11-20 15:19:22.730390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.007 [2024-11-20 15:19:22.811243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:19.007 [2024-11-20 15:19:22.853600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.007 [2024-11-20 15:19:22.853638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.007 [2024-11-20 15:19:22.853646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.007 [2024-11-20 15:19:22.853652] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.007 [2024-11-20 15:19:22.853657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:19.007 [2024-11-20 15:19:22.855101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.007 [2024-11-20 15:19:22.855210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:19.007 [2024-11-20 15:19:22.855319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:19.007 [2024-11-20 15:19:22.855318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.316 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.316 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:19.316 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:19.316 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:19.316 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.316 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.316 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:19.316 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:19.316 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.316 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.316 [2024-11-20 15:19:23.001339] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.316 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.316 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:19.316 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.316 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.316 Malloc1 00:10:19.316 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.316 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:19.316 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.316 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.316 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.316 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:19.316 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.316 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.316 15:19:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.316 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:19.316 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.316 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.316 [2024-11-20 15:19:23.151683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:19.316 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.316 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:19.316 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:19.317 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:19.317 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:19.317 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:19.317 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:19.317 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.317 15:19:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.317 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.317 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:19.317 { 00:10:19.317 "name": "Malloc1", 00:10:19.317 "aliases": [ 00:10:19.317 "02bac9ed-91b5-4346-b2cd-280f2fdfc593" 00:10:19.317 ], 00:10:19.317 "product_name": "Malloc disk", 00:10:19.317 "block_size": 512, 00:10:19.317 "num_blocks": 1048576, 00:10:19.317 "uuid": "02bac9ed-91b5-4346-b2cd-280f2fdfc593", 00:10:19.317 "assigned_rate_limits": { 00:10:19.317 "rw_ios_per_sec": 0, 00:10:19.317 "rw_mbytes_per_sec": 0, 00:10:19.317 "r_mbytes_per_sec": 0, 00:10:19.317 "w_mbytes_per_sec": 0 00:10:19.317 }, 00:10:19.317 "claimed": true, 00:10:19.317 "claim_type": "exclusive_write", 00:10:19.317 "zoned": false, 00:10:19.317 "supported_io_types": { 00:10:19.317 "read": true, 00:10:19.317 "write": true, 00:10:19.317 "unmap": true, 00:10:19.317 "flush": true, 00:10:19.317 "reset": true, 00:10:19.317 "nvme_admin": false, 00:10:19.317 "nvme_io": false, 00:10:19.317 "nvme_io_md": false, 00:10:19.317 "write_zeroes": true, 00:10:19.317 "zcopy": true, 00:10:19.317 "get_zone_info": false, 00:10:19.317 "zone_management": false, 00:10:19.317 "zone_append": false, 00:10:19.317 "compare": false, 00:10:19.317 "compare_and_write": false, 00:10:19.317 "abort": true, 00:10:19.317 "seek_hole": false, 00:10:19.317 "seek_data": false, 00:10:19.317 "copy": true, 00:10:19.317 "nvme_iov_md": false 00:10:19.317 }, 00:10:19.317 "memory_domains": [ 00:10:19.317 { 00:10:19.317 "dma_device_id": "system", 00:10:19.317 "dma_device_type": 1 00:10:19.317 }, 00:10:19.317 { 00:10:19.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.317 "dma_device_type": 2 00:10:19.317 } 00:10:19.317 ], 00:10:19.317 
"driver_specific": {} 00:10:19.317 } 00:10:19.317 ]' 00:10:19.317 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:19.575 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:19.575 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:19.575 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:19.575 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:19.575 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:19.575 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:19.575 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:20.975 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:20.975 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:20.975 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:20.975 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:20.975 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:22.879 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:22.879 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:22.879 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:22.879 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:22.879 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:22.879 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:22.879 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:22.879 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:22.879 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:22.879 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:22.879 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:22.879 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:22.879 15:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:22.879 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:22.879 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:22.879 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:22.879 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:23.138 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:23.397 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:24.334 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:24.334 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:24.334 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:24.334 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.334 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.334 ************************************ 00:10:24.334 START TEST filesystem_in_capsule_ext4 00:10:24.334 ************************************ 00:10:24.334 15:19:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:24.334 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:24.334 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:24.334 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:24.334 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:24.334 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:24.334 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:24.334 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:24.334 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:24.334 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:24.334 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:24.334 mke2fs 1.47.0 (5-Feb-2023) 00:10:24.334 Discarding device blocks: 
0/522240 done 00:10:24.334 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:24.334 Filesystem UUID: d2b22bc0-c877-4da7-ac87-c6ff17d9c506 00:10:24.334 Superblock backups stored on blocks: 00:10:24.334 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:24.334 00:10:24.334 Allocating group tables: 0/64 done 00:10:24.334 Writing inode tables: 0/64 done 00:10:24.592 Creating journal (8192 blocks): done 00:10:24.592 Writing superblocks and filesystem accounting information: 0/64 done 00:10:24.592 00:10:24.592 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:24.592 15:19:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:29.858 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:30.117 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:30.117 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:30.117 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:30.117 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:30.117 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:30.117 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2069170 00:10:30.117 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:30.117 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:30.117 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:30.117 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:30.117 00:10:30.117 real 0m5.735s 00:10:30.117 user 0m0.027s 00:10:30.117 sys 0m0.072s 00:10:30.117 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.117 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:30.117 ************************************ 00:10:30.117 END TEST filesystem_in_capsule_ext4 00:10:30.117 ************************************ 00:10:30.118 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:30.118 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:30.118 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.118 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.118 ************************************ 00:10:30.118 START 
TEST filesystem_in_capsule_btrfs 00:10:30.118 ************************************ 00:10:30.118 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:30.118 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:30.118 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:30.118 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:30.118 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:30.118 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:30.118 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:30.118 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:30.118 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:30.118 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:30.118 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:30.376 btrfs-progs v6.8.1 00:10:30.376 See https://btrfs.readthedocs.io for more information. 00:10:30.376 00:10:30.376 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:30.376 NOTE: several default settings have changed in version 5.15, please make sure 00:10:30.376 this does not affect your deployments: 00:10:30.376 - DUP for metadata (-m dup) 00:10:30.376 - enabled no-holes (-O no-holes) 00:10:30.376 - enabled free-space-tree (-R free-space-tree) 00:10:30.376 00:10:30.376 Label: (null) 00:10:30.376 UUID: c7fa0dbf-a5f0-4f96-8538-e256f15b7071 00:10:30.376 Node size: 16384 00:10:30.376 Sector size: 4096 (CPU page size: 4096) 00:10:30.376 Filesystem size: 510.00MiB 00:10:30.376 Block group profiles: 00:10:30.376 Data: single 8.00MiB 00:10:30.376 Metadata: DUP 32.00MiB 00:10:30.376 System: DUP 8.00MiB 00:10:30.376 SSD detected: yes 00:10:30.376 Zoned device: no 00:10:30.376 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:30.376 Checksum: crc32c 00:10:30.376 Number of devices: 1 00:10:30.376 Devices: 00:10:30.376 ID SIZE PATH 00:10:30.376 1 510.00MiB /dev/nvme0n1p1 00:10:30.376 00:10:30.376 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:30.376 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:30.635 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:30.635 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:30.635 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:30.635 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:30.635 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:30.635 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:30.635 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2069170 00:10:30.635 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:30.635 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:30.635 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:30.635 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:30.635 00:10:30.635 real 0m0.579s 00:10:30.635 user 0m0.036s 00:10:30.635 sys 0m0.105s 00:10:30.635 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.635 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:30.635 ************************************ 00:10:30.635 END TEST filesystem_in_capsule_btrfs 00:10:30.635 ************************************ 00:10:30.894 15:19:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:30.894 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:30.894 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.894 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.894 ************************************ 00:10:30.894 START TEST filesystem_in_capsule_xfs 00:10:30.894 ************************************ 00:10:30.894 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:30.894 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:30.894 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:30.894 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:30.894 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:30.894 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:30.894 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:30.894 
15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:30.894 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:30.894 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:30.894 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:30.894 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:30.894 = sectsz=512 attr=2, projid32bit=1 00:10:30.894 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:30.894 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:30.894 data = bsize=4096 blocks=130560, imaxpct=25 00:10:30.894 = sunit=0 swidth=0 blks 00:10:30.894 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:30.894 log =internal log bsize=4096 blocks=16384, version=2 00:10:30.894 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:30.894 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:31.829 Discarding blocks...Done. 
00:10:31.829 15:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0
00:10:31.829 15:19:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:34.358 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:34.358 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync
00:10:34.358 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:34.358 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync
00:10:34.358 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0
00:10:34.358 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:10:34.358 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2069170
00:10:34.358 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:10:34.358 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:10:34.358 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:10:34.358 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:10:34.358
00:10:34.358 real 0m3.343s
00:10:34.358 user 0m0.032s
00:10:34.358 sys 0m0.068s
00:10:34.358 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:34.358 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x
00:10:34.358 ************************************
00:10:34.358 END TEST filesystem_in_capsule_xfs
00:10:34.358 ************************************
00:10:34.358 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:34.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2069170
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2069170 ']'
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2069170
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2069170
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2069170'
00:10:34.358 killing process with pid 2069170
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2069170
00:10:34.358 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2069170
00:10:34.924 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:10:34.924
00:10:34.924 real 0m15.860s
00:10:34.924 user 1m2.314s
00:10:34.924 sys 0m1.406s
00:10:34.924 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:34.924 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:34.924 ************************************
00:10:34.924 END TEST nvmf_filesystem_in_capsule
00:10:34.924 ************************************
00:10:34.924 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:34.925 rmmod nvme_tcp
00:10:34.925 rmmod nvme_fabrics
00:10:34.925 rmmod nvme_keyring
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:34.925 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:36.831 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:36.831
00:10:36.831 real 0m39.840s
00:10:36.831 user 2m4.331s
00:10:36.831 sys 0m7.496s
00:10:36.831 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:36.831 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:10:36.831 ************************************
00:10:36.831 END TEST nvmf_filesystem
00:10:36.831 ************************************
00:10:37.090 15:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:10:37.090 15:19:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:37.090 15:19:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:37.090 15:19:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:37.090 ************************************
00:10:37.090 START TEST nvmf_target_discovery
00:10:37.090 ************************************
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:10:37.091 * Looking for test storage...
00:10:37.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:37.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:37.091 --rc genhtml_branch_coverage=1
00:10:37.091 --rc genhtml_function_coverage=1
00:10:37.091 --rc genhtml_legend=1
00:10:37.091 --rc geninfo_all_blocks=1
00:10:37.091 --rc geninfo_unexecuted_blocks=1
00:10:37.091
00:10:37.091 '
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:37.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:37.091 --rc genhtml_branch_coverage=1
00:10:37.091 --rc genhtml_function_coverage=1
00:10:37.091 --rc genhtml_legend=1
00:10:37.091 --rc geninfo_all_blocks=1
00:10:37.091 --rc geninfo_unexecuted_blocks=1
00:10:37.091
00:10:37.091 '
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:10:37.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:37.091 --rc genhtml_branch_coverage=1
00:10:37.091 --rc genhtml_function_coverage=1
00:10:37.091 --rc genhtml_legend=1
00:10:37.091 --rc geninfo_all_blocks=1
00:10:37.091 --rc geninfo_unexecuted_blocks=1
00:10:37.091
00:10:37.091 '
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:10:37.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:37.091 --rc genhtml_branch_coverage=1
00:10:37.091 --rc genhtml_function_coverage=1
00:10:37.091 --rc genhtml_legend=1
00:10:37.091 --rc geninfo_all_blocks=1
00:10:37.091 --rc geninfo_unexecuted_blocks=1
00:10:37.091
00:10:37.091 '
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:37.091 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:37.092 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:37.092 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:37.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:37.092 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:37.092 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:37.092 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:37.092 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400
00:10:37.351 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512
00:10:37.351 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430
00:10:37.351 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme
00:10:37.351 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit
00:10:37.351 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:37.351 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:37.351 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:37.351 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:37.351 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:37.351 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:37.351 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:37.351 15:19:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:37.351 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:37.351 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:37.351 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable
00:10:37.351 15:19:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=()
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=()
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=()
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=()
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=()
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:10:43.924 Found 0000:86:00.0 (0x8086 - 0x159b)
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:43.924 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:10:43.924 Found 0000:86:00.1 (0x8086 - 0x159b)
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:10:43.925 Found net devices under 0000:86:00.0: cvl_0_0
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:10:43.925 Found net devices under 0000:86:00.1: cvl_0_1
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec
cvl_0_0_ns_spdk ip link set lo up 00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:43.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:43.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:10:43.925 00:10:43.925 --- 10.0.0.2 ping statistics --- 00:10:43.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.925 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:43.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:43.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:10:43.925 00:10:43.925 --- 10.0.0.1 ping statistics --- 00:10:43.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.925 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:43.925 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:43.925 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:43.925 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:43.925 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:43.925 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.925 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2075530 00:10:43.925 15:19:47 
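The `nvmf_tcp_init` steps traced above (common.sh@250 through @291) amount to moving the target NIC into a private network namespace, addressing both ends, opening the NVMe/TCP port, and verifying reachability. A hedged reconstruction follows: the interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addresses are taken from the log, everything else is an illustrative sketch. The function is only defined here, since actually running it needs root and the physical `cvl_0_*` NICs.

```shell
# Sketch of the namespace plumbing performed by nvmf_tcp_init in the trace.
# Defined but not invoked: requires root and the real cvl_0_* interfaces.
setup_tcp_netns() {
    local ns=cvl_0_0_ns_spdk
    local tgt_if=cvl_0_0      # moved into the namespace, serves 10.0.0.2
    local ini_if=cvl_0_1      # stays in the root namespace, uses 10.0.0.1
    ip -4 addr flush "$tgt_if"
    ip -4 addr flush "$ini_if"
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"
    ip addr add 10.0.0.1/24 dev "$ini_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    # Open TCP/4420 for NVMe-oF, then verify reachability in both directions,
    # matching the two ping checks in the log.
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

Because the target process is later launched with `ip netns exec cvl_0_0_ns_spdk`, it listens on 10.0.0.2 inside the namespace while the initiator side connects from 10.0.0.1 in the root namespace.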
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2075530 00:10:43.925 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:43.925 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2075530 ']' 00:10:43.925 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.925 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.925 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.925 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.925 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.925 [2024-11-20 15:19:47.068165] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:10:43.925 [2024-11-20 15:19:47.068219] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.925 [2024-11-20 15:19:47.150702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:43.925 [2024-11-20 15:19:47.192418] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:43.925 [2024-11-20 15:19:47.192460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.925 [2024-11-20 15:19:47.192467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.925 [2024-11-20 15:19:47.192474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.925 [2024-11-20 15:19:47.192480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:43.925 [2024-11-20 15:19:47.193905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.925 [2024-11-20 15:19:47.194016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.925 [2024-11-20 15:19:47.194053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.925 [2024-11-20 15:19:47.194054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.925 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.925 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:43.925 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:43.925 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:43.925 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 [2024-11-20 15:19:47.343802] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 Null1 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.926 
15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 [2024-11-20 15:19:47.389346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 Null2 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 
15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 Null3 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 Null4 00:10:43.926 
15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
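The `discovery.sh` loop traced above (@26 through @35) creates four null bdevs, wraps each in its own subsystem with a namespace and a TCP listener, then adds a discovery listener and a referral. A sketch of that loop, where `RPC` stands in for the script's `rpc_cmd` helper and the `rpc.py` path is a hypothetical placeholder:

```shell
# Sketch of the per-subsystem setup loop; the rpc.py location below is an
# assumed placeholder, and the function is only defined, not run.
RPC="ip netns exec cvl_0_0_ns_spdk /path/to/spdk/scripts/rpc.py"
create_null_subsystems() {
    local i serial
    for i in $(seq 1 4); do
        # Serial numbers in the trace are SPDK + 14 zero-padded digits.
        serial=$(printf 'SPDK%014d' "$i")
        # Null bdev with the size/block-size arguments used in the trace.
        $RPC bdev_null_create "Null$i" 102400 512
        $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "$serial"
        $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
}
```

The referral on port 4430 is what later shows up as Discovery Log Entry 5 (`subtype: discovery subsystem referral`, `trsvcid: 4430`) in the `nvme discover` output.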
common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.926 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:43.926 00:10:43.926 Discovery Log Number of Records 6, Generation counter 6 00:10:43.926 =====Discovery Log Entry 0====== 00:10:43.926 trtype: tcp 00:10:43.926 adrfam: ipv4 00:10:43.926 subtype: current discovery subsystem 00:10:43.926 treq: not required 00:10:43.926 portid: 0 00:10:43.927 trsvcid: 4420 00:10:43.927 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:43.927 traddr: 10.0.0.2 00:10:43.927 eflags: explicit discovery connections, duplicate discovery information 00:10:43.927 sectype: none 00:10:43.927 =====Discovery Log Entry 1====== 00:10:43.927 trtype: tcp 00:10:43.927 adrfam: ipv4 00:10:43.927 subtype: nvme subsystem 00:10:43.927 treq: not required 00:10:43.927 portid: 0 00:10:43.927 trsvcid: 4420 00:10:43.927 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:43.927 traddr: 10.0.0.2 00:10:43.927 eflags: none 00:10:43.927 sectype: none 00:10:43.927 =====Discovery Log Entry 2====== 00:10:43.927 
trtype: tcp 00:10:43.927 adrfam: ipv4 00:10:43.927 subtype: nvme subsystem 00:10:43.927 treq: not required 00:10:43.927 portid: 0 00:10:43.927 trsvcid: 4420 00:10:43.927 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:43.927 traddr: 10.0.0.2 00:10:43.927 eflags: none 00:10:43.927 sectype: none 00:10:43.927 =====Discovery Log Entry 3====== 00:10:43.927 trtype: tcp 00:10:43.927 adrfam: ipv4 00:10:43.927 subtype: nvme subsystem 00:10:43.927 treq: not required 00:10:43.927 portid: 0 00:10:43.927 trsvcid: 4420 00:10:43.927 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:43.927 traddr: 10.0.0.2 00:10:43.927 eflags: none 00:10:43.927 sectype: none 00:10:43.927 =====Discovery Log Entry 4====== 00:10:43.927 trtype: tcp 00:10:43.927 adrfam: ipv4 00:10:43.927 subtype: nvme subsystem 00:10:43.927 treq: not required 00:10:43.927 portid: 0 00:10:43.927 trsvcid: 4420 00:10:43.927 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:43.927 traddr: 10.0.0.2 00:10:43.927 eflags: none 00:10:43.927 sectype: none 00:10:43.927 =====Discovery Log Entry 5====== 00:10:43.927 trtype: tcp 00:10:43.927 adrfam: ipv4 00:10:43.927 subtype: discovery subsystem referral 00:10:43.927 treq: not required 00:10:43.927 portid: 0 00:10:43.927 trsvcid: 4430 00:10:43.927 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:43.927 traddr: 10.0.0.2 00:10:43.927 eflags: none 00:10:43.927 sectype: none 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:43.927 Perform nvmf subsystem discovery via RPC 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.927 [ 00:10:43.927 { 00:10:43.927 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:10:43.927 "subtype": "Discovery", 00:10:43.927 "listen_addresses": [ 00:10:43.927 { 00:10:43.927 "trtype": "TCP", 00:10:43.927 "adrfam": "IPv4", 00:10:43.927 "traddr": "10.0.0.2", 00:10:43.927 "trsvcid": "4420" 00:10:43.927 } 00:10:43.927 ], 00:10:43.927 "allow_any_host": true, 00:10:43.927 "hosts": [] 00:10:43.927 }, 00:10:43.927 { 00:10:43.927 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:43.927 "subtype": "NVMe", 00:10:43.927 "listen_addresses": [ 00:10:43.927 { 00:10:43.927 "trtype": "TCP", 00:10:43.927 "adrfam": "IPv4", 00:10:43.927 "traddr": "10.0.0.2", 00:10:43.927 "trsvcid": "4420" 00:10:43.927 } 00:10:43.927 ], 00:10:43.927 "allow_any_host": true, 00:10:43.927 "hosts": [], 00:10:43.927 "serial_number": "SPDK00000000000001", 00:10:43.927 "model_number": "SPDK bdev Controller", 00:10:43.927 "max_namespaces": 32, 00:10:43.927 "min_cntlid": 1, 00:10:43.927 "max_cntlid": 65519, 00:10:43.927 "namespaces": [ 00:10:43.927 { 00:10:43.927 "nsid": 1, 00:10:43.927 "bdev_name": "Null1", 00:10:43.927 "name": "Null1", 00:10:43.927 "nguid": "C9D802442E3444C19C2142FB168865A2", 00:10:43.927 "uuid": "c9d80244-2e34-44c1-9c21-42fb168865a2" 00:10:43.927 } 00:10:43.927 ] 00:10:43.927 }, 00:10:43.927 { 00:10:43.927 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:43.927 "subtype": "NVMe", 00:10:43.927 "listen_addresses": [ 00:10:43.927 { 00:10:43.927 "trtype": "TCP", 00:10:43.927 "adrfam": "IPv4", 00:10:43.927 "traddr": "10.0.0.2", 00:10:43.927 "trsvcid": "4420" 00:10:43.927 } 00:10:43.927 ], 00:10:43.927 "allow_any_host": true, 00:10:43.927 "hosts": [], 00:10:43.927 "serial_number": "SPDK00000000000002", 00:10:43.927 "model_number": "SPDK bdev Controller", 00:10:43.927 "max_namespaces": 32, 00:10:43.927 "min_cntlid": 1, 00:10:43.927 "max_cntlid": 65519, 00:10:43.927 "namespaces": [ 00:10:43.927 { 00:10:43.927 "nsid": 1, 00:10:43.927 "bdev_name": "Null2", 00:10:43.927 "name": "Null2", 00:10:43.927 "nguid": "A15F138CB2F14160AEFE6BAFE73FD9B2", 
00:10:43.927 "uuid": "a15f138c-b2f1-4160-aefe-6bafe73fd9b2" 00:10:43.927 } 00:10:43.927 ] 00:10:43.927 }, 00:10:43.927 { 00:10:43.927 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:43.927 "subtype": "NVMe", 00:10:43.927 "listen_addresses": [ 00:10:43.927 { 00:10:43.927 "trtype": "TCP", 00:10:43.927 "adrfam": "IPv4", 00:10:43.927 "traddr": "10.0.0.2", 00:10:43.927 "trsvcid": "4420" 00:10:43.927 } 00:10:43.927 ], 00:10:43.927 "allow_any_host": true, 00:10:43.927 "hosts": [], 00:10:43.927 "serial_number": "SPDK00000000000003", 00:10:43.927 "model_number": "SPDK bdev Controller", 00:10:43.927 "max_namespaces": 32, 00:10:43.927 "min_cntlid": 1, 00:10:43.927 "max_cntlid": 65519, 00:10:43.927 "namespaces": [ 00:10:43.927 { 00:10:43.927 "nsid": 1, 00:10:43.927 "bdev_name": "Null3", 00:10:43.927 "name": "Null3", 00:10:43.927 "nguid": "94E021766936474B9CD9375F43B72E4A", 00:10:43.927 "uuid": "94e02176-6936-474b-9cd9-375f43b72e4a" 00:10:43.927 } 00:10:43.927 ] 00:10:43.927 }, 00:10:43.927 { 00:10:43.927 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:43.927 "subtype": "NVMe", 00:10:43.927 "listen_addresses": [ 00:10:43.927 { 00:10:43.927 "trtype": "TCP", 00:10:43.927 "adrfam": "IPv4", 00:10:43.927 "traddr": "10.0.0.2", 00:10:43.927 "trsvcid": "4420" 00:10:43.927 } 00:10:43.927 ], 00:10:43.927 "allow_any_host": true, 00:10:43.927 "hosts": [], 00:10:43.927 "serial_number": "SPDK00000000000004", 00:10:43.927 "model_number": "SPDK bdev Controller", 00:10:43.927 "max_namespaces": 32, 00:10:43.927 "min_cntlid": 1, 00:10:43.927 "max_cntlid": 65519, 00:10:43.927 "namespaces": [ 00:10:43.927 { 00:10:43.927 "nsid": 1, 00:10:43.927 "bdev_name": "Null4", 00:10:43.927 "name": "Null4", 00:10:43.927 "nguid": "E919B7BB3BE44A71B9BBBAF261596E14", 00:10:43.927 "uuid": "e919b7bb-3be4-4a71-b9bb-baf261596e14" 00:10:43.927 } 00:10:43.927 ] 00:10:43.927 } 00:10:43.927 ] 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.927 
15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:10:43.927 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:43.928 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:44.186 rmmod nvme_tcp 00:10:44.186 rmmod nvme_fabrics 00:10:44.186 rmmod nvme_keyring 00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2075530 ']' 00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2075530 00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2075530 ']' 00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2075530 00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2075530 00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2075530' 00:10:44.186 killing process with pid 2075530 00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2075530 00:10:44.186 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2075530 00:10:44.445 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:44.445 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:44.445 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:44.445 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:44.445 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:44.445 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:44.445 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:44.446 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:44.446 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:44.446 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.446 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.446 15:19:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.351 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:46.351 00:10:46.351 real 0m9.381s 00:10:46.351 user 0m5.466s 00:10:46.351 sys 0m4.937s 00:10:46.351 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.351 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.351 ************************************ 00:10:46.351 END TEST nvmf_target_discovery 00:10:46.351 ************************************ 00:10:46.351 15:19:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:46.351 15:19:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.351 15:19:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.352 15:19:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:46.352 ************************************ 00:10:46.352 START TEST nvmf_referrals 00:10:46.352 ************************************ 00:10:46.352 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:46.611 * Looking for test storage... 
00:10:46.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:46.611 15:19:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:46.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.611 
--rc genhtml_branch_coverage=1 00:10:46.611 --rc genhtml_function_coverage=1 00:10:46.611 --rc genhtml_legend=1 00:10:46.611 --rc geninfo_all_blocks=1 00:10:46.611 --rc geninfo_unexecuted_blocks=1 00:10:46.611 00:10:46.611 ' 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:46.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.611 --rc genhtml_branch_coverage=1 00:10:46.611 --rc genhtml_function_coverage=1 00:10:46.611 --rc genhtml_legend=1 00:10:46.611 --rc geninfo_all_blocks=1 00:10:46.611 --rc geninfo_unexecuted_blocks=1 00:10:46.611 00:10:46.611 ' 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:46.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.611 --rc genhtml_branch_coverage=1 00:10:46.611 --rc genhtml_function_coverage=1 00:10:46.611 --rc genhtml_legend=1 00:10:46.611 --rc geninfo_all_blocks=1 00:10:46.611 --rc geninfo_unexecuted_blocks=1 00:10:46.611 00:10:46.611 ' 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:46.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.611 --rc genhtml_branch_coverage=1 00:10:46.611 --rc genhtml_function_coverage=1 00:10:46.611 --rc genhtml_legend=1 00:10:46.611 --rc geninfo_all_blocks=1 00:10:46.611 --rc geninfo_unexecuted_blocks=1 00:10:46.611 00:10:46.611 ' 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.611 
15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:46.611 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.612 15:19:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:46.612 15:19:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:46.612 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.182 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:53.183 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:53.183 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:53.183 Found net devices under 0000:86:00.0: cvl_0_0 00:10:53.183 15:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:53.183 Found net devices under 0000:86:00.1: cvl_0_1 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:53.183 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:53.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:53.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:10:53.184 00:10:53.184 --- 10.0.0.2 ping statistics --- 00:10:53.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.184 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:53.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:53.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:10:53.184 00:10:53.184 --- 10.0.0.1 ping statistics --- 00:10:53.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.184 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2079228 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2079228 00:10:53.184 
15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2079228 ']' 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.184 [2024-11-20 15:19:56.524694] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:10:53.184 [2024-11-20 15:19:56.524736] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.184 [2024-11-20 15:19:56.604133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.184 [2024-11-20 15:19:56.646783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.184 [2024-11-20 15:19:56.646820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:53.184 [2024-11-20 15:19:56.646827] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.184 [2024-11-20 15:19:56.646833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.184 [2024-11-20 15:19:56.646838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:53.184 [2024-11-20 15:19:56.648329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.184 [2024-11-20 15:19:56.648433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.184 [2024-11-20 15:19:56.648541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.184 [2024-11-20 15:19:56.648542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.184 [2024-11-20 15:19:56.790045] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.184 [2024-11-20 15:19:56.803338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:53.184 15:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:53.184 15:19:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:53.442 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:53.442 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:53.442 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:53.442 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.442 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.442 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.442 15:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:53.442 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.443 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.443 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.443 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:53.443 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.443 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.443 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.443 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:53.443 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:53.443 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.443 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.443 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.443 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:53.443 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:53.443 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:53.443 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:10:53.443 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:53.443 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:53.443 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:53.701 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:53.960 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:53.960 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:53.960 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:53.960 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:53.960 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:53.960 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:53.960 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:53.960 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:53.960 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:53.960 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:53.960 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:53.960 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:53.960 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:10:54.218 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:54.218 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:54.218 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.218 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.218 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.218 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:54.218 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:54.218 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:54.218 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:54.218 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.218 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:54.218 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.218 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.218 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:54.218 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:54.218 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:54.218 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:54.218 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:54.219 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:54.219 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:54.219 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:54.476 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:54.476 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:54.476 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:54.476 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:54.476 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:54.476 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:54.476 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:54.476 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:54.476 15:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:54.476 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:54.476 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:54.476 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:54.476 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:54.734 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:54.734 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:54.734 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.734 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.734 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.734 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:54.734 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:54.734 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.734 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:10:54.734 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.734 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:54.734 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:54.734 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:54.734 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:54.734 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:54.734 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:54.734 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:54.992 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:54.992 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:54.992 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:54.992 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:54.992 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:54.992 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:54.992 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:54.992 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:10:54.992 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:54.992 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:54.992 rmmod nvme_tcp 00:10:54.992 rmmod nvme_fabrics 00:10:54.992 rmmod nvme_keyring 00:10:54.992 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:54.992 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:54.992 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:54.992 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2079228 ']' 00:10:54.992 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2079228 00:10:54.992 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2079228 ']' 00:10:54.992 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2079228 00:10:54.992 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:54.992 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.992 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2079228 00:10:55.251 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.251 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.251 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2079228' 00:10:55.251 killing process with pid 2079228 00:10:55.251 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 2079228 00:10:55.251 15:19:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2079228 00:10:55.251 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:55.251 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:55.251 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:55.251 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:55.251 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:55.251 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:55.251 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:55.251 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:55.251 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:55.251 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.251 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.251 15:19:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.787 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:57.787 00:10:57.787 real 0m10.922s 00:10:57.787 user 0m12.282s 00:10:57.787 sys 0m5.300s 00:10:57.787 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.787 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.787 
************************************ 00:10:57.787 END TEST nvmf_referrals 00:10:57.787 ************************************ 00:10:57.787 15:20:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:57.787 15:20:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:57.787 15:20:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.787 15:20:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:57.787 ************************************ 00:10:57.787 START TEST nvmf_connect_disconnect 00:10:57.787 ************************************ 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:57.788 * Looking for test storage... 
00:10:57.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:57.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.788 --rc genhtml_branch_coverage=1 00:10:57.788 --rc genhtml_function_coverage=1 00:10:57.788 --rc genhtml_legend=1 00:10:57.788 --rc geninfo_all_blocks=1 00:10:57.788 --rc geninfo_unexecuted_blocks=1 00:10:57.788 00:10:57.788 ' 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:57.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.788 --rc genhtml_branch_coverage=1 00:10:57.788 --rc genhtml_function_coverage=1 00:10:57.788 --rc genhtml_legend=1 00:10:57.788 --rc geninfo_all_blocks=1 00:10:57.788 --rc geninfo_unexecuted_blocks=1 00:10:57.788 00:10:57.788 ' 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:57.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.788 --rc genhtml_branch_coverage=1 00:10:57.788 --rc genhtml_function_coverage=1 00:10:57.788 --rc genhtml_legend=1 00:10:57.788 --rc geninfo_all_blocks=1 00:10:57.788 --rc geninfo_unexecuted_blocks=1 00:10:57.788 00:10:57.788 ' 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:57.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.788 --rc genhtml_branch_coverage=1 00:10:57.788 --rc genhtml_function_coverage=1 00:10:57.788 --rc genhtml_legend=1 00:10:57.788 --rc geninfo_all_blocks=1 00:10:57.788 --rc geninfo_unexecuted_blocks=1 00:10:57.788 00:10:57.788 ' 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.788 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:57.789 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.361 15:20:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:04.361 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:04.362 15:20:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:04.362 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:04.362 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.362 15:20:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:04.362 Found net devices under 0000:86:00.0: cvl_0_0 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:04.362 15:20:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:04.362 Found net devices under 0000:86:00.1: cvl_0_1 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.362 15:20:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:04.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:11:04.362 00:11:04.362 --- 10.0.0.2 ping statistics --- 00:11:04.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.362 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:04.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:11:04.362 00:11:04.362 --- 10.0.0.1 ping statistics --- 00:11:04.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.362 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:11:04.362 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2083305 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2083305 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2083305 ']' 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.363 [2024-11-20 15:20:07.484313] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:11:04.363 [2024-11-20 15:20:07.484364] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.363 [2024-11-20 15:20:07.564753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.363 [2024-11-20 15:20:07.606574] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:04.363 [2024-11-20 15:20:07.606613] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.363 [2024-11-20 15:20:07.606621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.363 [2024-11-20 15:20:07.606627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.363 [2024-11-20 15:20:07.606632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.363 [2024-11-20 15:20:07.608244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.363 [2024-11-20 15:20:07.608353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.363 [2024-11-20 15:20:07.608438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.363 [2024-11-20 15:20:07.608438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:04.363 15:20:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.363 [2024-11-20 15:20:07.753960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.363 15:20:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.363 [2024-11-20 15:20:07.822687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:04.363 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:07.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:20.855 15:20:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:20.855 rmmod nvme_tcp 00:11:20.855 rmmod nvme_fabrics 00:11:20.855 rmmod nvme_keyring 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2083305 ']' 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2083305 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2083305 ']' 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2083305 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2083305 
00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2083305' 00:11:20.855 killing process with pid 2083305 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2083305 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2083305 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.855 15:20:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.855 15:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.759 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:22.759 00:11:22.759 real 0m25.341s 00:11:22.759 user 1m8.823s 00:11:22.759 sys 0m5.835s 00:11:22.759 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.759 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:22.759 ************************************ 00:11:22.759 END TEST nvmf_connect_disconnect 00:11:22.759 ************************************ 00:11:22.759 15:20:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:22.759 15:20:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:22.759 15:20:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.759 15:20:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:22.759 ************************************ 00:11:22.759 START TEST nvmf_multitarget 00:11:22.759 ************************************ 00:11:22.759 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:23.019 * Looking for test storage... 
00:11:23.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:23.019 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:23.020 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.020 --rc genhtml_branch_coverage=1 00:11:23.020 --rc genhtml_function_coverage=1 00:11:23.020 --rc genhtml_legend=1 00:11:23.020 --rc geninfo_all_blocks=1 00:11:23.020 --rc geninfo_unexecuted_blocks=1 00:11:23.020 00:11:23.020 ' 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:23.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.020 --rc genhtml_branch_coverage=1 00:11:23.020 --rc genhtml_function_coverage=1 00:11:23.020 --rc genhtml_legend=1 00:11:23.020 --rc geninfo_all_blocks=1 00:11:23.020 --rc geninfo_unexecuted_blocks=1 00:11:23.020 00:11:23.020 ' 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:23.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.020 --rc genhtml_branch_coverage=1 00:11:23.020 --rc genhtml_function_coverage=1 00:11:23.020 --rc genhtml_legend=1 00:11:23.020 --rc geninfo_all_blocks=1 00:11:23.020 --rc geninfo_unexecuted_blocks=1 00:11:23.020 00:11:23.020 ' 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:23.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.020 --rc genhtml_branch_coverage=1 00:11:23.020 --rc genhtml_function_coverage=1 00:11:23.020 --rc genhtml_legend=1 00:11:23.020 --rc geninfo_all_blocks=1 00:11:23.020 --rc geninfo_unexecuted_blocks=1 00:11:23.020 00:11:23.020 ' 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.020 15:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:23.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.020 15:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:23.020 15:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:29.627 15:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:29.627 15:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:29.627 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:29.627 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:29.627 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.628 15:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:29.628 Found net devices under 0000:86:00.0: cvl_0_0 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.628 
15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:29.628 Found net devices under 0000:86:00.1: cvl_0_1 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.628 15:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:29.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:29.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:11:29.628 00:11:29.628 --- 10.0.0.2 ping statistics --- 00:11:29.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.628 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:29.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:11:29.628 00:11:29.628 --- 10.0.0.1 ping statistics --- 00:11:29.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.628 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2089708 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 2089708 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2089708 ']' 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.628 15:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:29.628 [2024-11-20 15:20:32.917280] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:11:29.628 [2024-11-20 15:20:32.917332] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.628 [2024-11-20 15:20:32.995640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.628 [2024-11-20 15:20:33.039032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.628 [2024-11-20 15:20:33.039070] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:29.628 [2024-11-20 15:20:33.039077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.628 [2024-11-20 15:20:33.039083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.628 [2024-11-20 15:20:33.039088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:29.628 [2024-11-20 15:20:33.040515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.628 [2024-11-20 15:20:33.040630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.628 [2024-11-20 15:20:33.040737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.628 [2024-11-20 15:20:33.040738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.628 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.628 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:29.628 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:29.628 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:29.628 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:29.628 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.629 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:29.629 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:29.629 15:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:29.629 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:29.629 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:29.629 "nvmf_tgt_1" 00:11:29.629 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:29.629 "nvmf_tgt_2" 00:11:29.629 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:29.629 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:29.888 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:29.888 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:29.888 true 00:11:29.888 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:30.147 true 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:30.147 rmmod nvme_tcp 00:11:30.147 rmmod nvme_fabrics 00:11:30.147 rmmod nvme_keyring 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2089708 ']' 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2089708 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2089708 ']' 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2089708 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:30.147 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2089708 00:11:30.147 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:30.147 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:30.147 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2089708' 00:11:30.147 killing process with pid 2089708 00:11:30.147 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2089708 00:11:30.147 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2089708 00:11:30.406 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:30.406 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:30.406 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:30.406 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:30.406 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:30.406 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:30.406 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:30.406 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:30.406 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:30.406 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.406 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.406 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:32.945 00:11:32.945 real 0m9.600s 00:11:32.945 user 0m7.088s 00:11:32.945 sys 0m4.900s 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:32.945 ************************************ 00:11:32.945 END TEST nvmf_multitarget 00:11:32.945 ************************************ 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:32.945 ************************************ 00:11:32.945 START TEST nvmf_rpc 00:11:32.945 ************************************ 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:32.945 * Looking for test storage... 
00:11:32.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.945 15:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:32.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.945 --rc genhtml_branch_coverage=1 00:11:32.945 --rc genhtml_function_coverage=1 00:11:32.945 --rc genhtml_legend=1 00:11:32.945 --rc geninfo_all_blocks=1 00:11:32.945 --rc geninfo_unexecuted_blocks=1 
00:11:32.945 00:11:32.945 ' 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:32.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.945 --rc genhtml_branch_coverage=1 00:11:32.945 --rc genhtml_function_coverage=1 00:11:32.945 --rc genhtml_legend=1 00:11:32.945 --rc geninfo_all_blocks=1 00:11:32.945 --rc geninfo_unexecuted_blocks=1 00:11:32.945 00:11:32.945 ' 00:11:32.945 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:32.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.945 --rc genhtml_branch_coverage=1 00:11:32.945 --rc genhtml_function_coverage=1 00:11:32.945 --rc genhtml_legend=1 00:11:32.945 --rc geninfo_all_blocks=1 00:11:32.945 --rc geninfo_unexecuted_blocks=1 00:11:32.945 00:11:32.945 ' 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:32.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.946 --rc genhtml_branch_coverage=1 00:11:32.946 --rc genhtml_function_coverage=1 00:11:32.946 --rc genhtml_legend=1 00:11:32.946 --rc geninfo_all_blocks=1 00:11:32.946 --rc geninfo_unexecuted_blocks=1 00:11:32.946 00:11:32.946 ' 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.946 15:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:32.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:32.946 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:32.946 15:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.523 
15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:11:39.523 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:39.523 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:39.523 Found net devices under 0000:86:00.0: cvl_0_0 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:39.523 Found net devices under 0000:86:00.1: cvl_0_1 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.523 15:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:39.523 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:39.524 
15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:39.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:39.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:11:39.524 00:11:39.524 --- 10.0.0.2 ping statistics --- 00:11:39.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.524 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:39.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:39.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:11:39.524 00:11:39.524 --- 10.0.0.1 ping statistics --- 00:11:39.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.524 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2093494 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:39.524 
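The namespace topology that `nvmf_tcp_init` builds in the trace above (flush addresses, move one port into a namespace, assign 10.0.0.1/10.0.0.2, open TCP port 4420, then ping both ways) can be condensed into a standalone sketch. The interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the addresses are copied from this log; everything else (the `run` dry-run wrapper, the function name) is illustrative. The commands need root, so by default the script only prints them; set `RUN=1` to execute.

```shell
#!/bin/sh
# Sketch of the NVMe/TCP test topology from nvmf_tcp_init in this trace.
# Dry run by default (prints commands); RUN=1 executes them (requires root).
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # moved into the namespace; will serve 10.0.0.2
INI_IF=cvl_0_1   # stays in the default namespace; initiator side, 10.0.0.1

run() { if [ "${RUN:-0}" = 1 ]; then "$@"; else echo "$@"; fi; }

setup_nvmf_netns() {
    run ip -4 addr flush "$TGT_IF"
    run ip -4 addr flush "$INI_IF"
    run ip netns add "$NS"
    run ip link set "$TGT_IF" netns "$NS"
    run ip addr add 10.0.0.1/24 dev "$INI_IF"
    run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    run ip link set "$INI_IF" up
    run ip netns exec "$NS" ip link set "$TGT_IF" up
    run ip netns exec "$NS" ip link set lo up
    # Open the NVMe/TCP listener port on the initiator-facing interface.
    run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
}

setup_nvmf_netns
```

This is exactly the split visible in the rest of the trace: the target (`nvmf_tgt`) is launched under `ip netns exec cvl_0_0_ns_spdk` and listens on 10.0.0.2:4420, while `nvme connect` runs from the default namespace over `cvl_0_1`.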
15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2093494 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2093494 ']' 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.524 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.524 [2024-11-20 15:20:42.581444] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:11:39.524 [2024-11-20 15:20:42.581490] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.524 [2024-11-20 15:20:42.660863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:39.524 [2024-11-20 15:20:42.702584] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.524 [2024-11-20 15:20:42.702623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:39.524 [2024-11-20 15:20:42.702630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.524 [2024-11-20 15:20:42.702636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:39.524 [2024-11-20 15:20:42.702642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:39.524 [2024-11-20 15:20:42.704239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.524 [2024-11-20 15:20:42.704347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.524 [2024-11-20 15:20:42.704454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.524 [2024-11-20 15:20:42.704455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:39.524 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.524 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:39.524 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:39.524 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:39.524 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:39.783 "tick_rate": 2300000000, 00:11:39.783 "poll_groups": [ 00:11:39.783 { 00:11:39.783 "name": "nvmf_tgt_poll_group_000", 00:11:39.783 "admin_qpairs": 0, 00:11:39.783 "io_qpairs": 0, 00:11:39.783 
"current_admin_qpairs": 0, 00:11:39.783 "current_io_qpairs": 0, 00:11:39.783 "pending_bdev_io": 0, 00:11:39.783 "completed_nvme_io": 0, 00:11:39.783 "transports": [] 00:11:39.783 }, 00:11:39.783 { 00:11:39.783 "name": "nvmf_tgt_poll_group_001", 00:11:39.783 "admin_qpairs": 0, 00:11:39.783 "io_qpairs": 0, 00:11:39.783 "current_admin_qpairs": 0, 00:11:39.783 "current_io_qpairs": 0, 00:11:39.783 "pending_bdev_io": 0, 00:11:39.783 "completed_nvme_io": 0, 00:11:39.783 "transports": [] 00:11:39.783 }, 00:11:39.783 { 00:11:39.783 "name": "nvmf_tgt_poll_group_002", 00:11:39.783 "admin_qpairs": 0, 00:11:39.783 "io_qpairs": 0, 00:11:39.783 "current_admin_qpairs": 0, 00:11:39.783 "current_io_qpairs": 0, 00:11:39.783 "pending_bdev_io": 0, 00:11:39.783 "completed_nvme_io": 0, 00:11:39.783 "transports": [] 00:11:39.783 }, 00:11:39.783 { 00:11:39.783 "name": "nvmf_tgt_poll_group_003", 00:11:39.783 "admin_qpairs": 0, 00:11:39.783 "io_qpairs": 0, 00:11:39.783 "current_admin_qpairs": 0, 00:11:39.783 "current_io_qpairs": 0, 00:11:39.783 "pending_bdev_io": 0, 00:11:39.783 "completed_nvme_io": 0, 00:11:39.783 "transports": [] 00:11:39.783 } 00:11:39.783 ] 00:11:39.783 }' 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.783 [2024-11-20 15:20:43.571512] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:39.783 "tick_rate": 2300000000, 00:11:39.783 "poll_groups": [ 00:11:39.783 { 00:11:39.783 "name": "nvmf_tgt_poll_group_000", 00:11:39.783 "admin_qpairs": 0, 00:11:39.783 "io_qpairs": 0, 00:11:39.783 "current_admin_qpairs": 0, 00:11:39.783 "current_io_qpairs": 0, 00:11:39.783 "pending_bdev_io": 0, 00:11:39.783 "completed_nvme_io": 0, 00:11:39.783 "transports": [ 00:11:39.783 { 00:11:39.783 "trtype": "TCP" 00:11:39.783 } 00:11:39.783 ] 00:11:39.783 }, 00:11:39.783 { 00:11:39.783 "name": "nvmf_tgt_poll_group_001", 00:11:39.783 "admin_qpairs": 0, 00:11:39.783 "io_qpairs": 0, 00:11:39.783 "current_admin_qpairs": 0, 00:11:39.783 "current_io_qpairs": 0, 00:11:39.783 "pending_bdev_io": 0, 00:11:39.783 "completed_nvme_io": 0, 00:11:39.783 "transports": [ 00:11:39.783 { 00:11:39.783 "trtype": "TCP" 00:11:39.783 } 00:11:39.783 ] 00:11:39.783 }, 00:11:39.783 { 00:11:39.783 "name": "nvmf_tgt_poll_group_002", 00:11:39.783 "admin_qpairs": 0, 00:11:39.783 "io_qpairs": 0, 00:11:39.783 
"current_admin_qpairs": 0, 00:11:39.783 "current_io_qpairs": 0, 00:11:39.783 "pending_bdev_io": 0, 00:11:39.783 "completed_nvme_io": 0, 00:11:39.783 "transports": [ 00:11:39.783 { 00:11:39.783 "trtype": "TCP" 00:11:39.783 } 00:11:39.783 ] 00:11:39.783 }, 00:11:39.783 { 00:11:39.783 "name": "nvmf_tgt_poll_group_003", 00:11:39.783 "admin_qpairs": 0, 00:11:39.783 "io_qpairs": 0, 00:11:39.783 "current_admin_qpairs": 0, 00:11:39.783 "current_io_qpairs": 0, 00:11:39.783 "pending_bdev_io": 0, 00:11:39.783 "completed_nvme_io": 0, 00:11:39.783 "transports": [ 00:11:39.783 { 00:11:39.783 "trtype": "TCP" 00:11:39.783 } 00:11:39.783 ] 00:11:39.783 } 00:11:39.783 ] 00:11:39.783 }' 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:39.783 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.042 Malloc1 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.042 [2024-11-20 15:20:43.753430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.042 
15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:40.042 [2024-11-20 15:20:43.782168] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:40.042 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:40.042 could not add new controller: failed to write to nvme-fabrics device 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.042 15:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.042 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:41.416 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:41.416 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:41.416 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:41.416 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:41.416 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:43.317 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:43.317 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:43.317 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:43.317 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:43.317 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:43.317 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:43.317 15:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:43.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.317 15:20:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:43.317 [2024-11-20 15:20:47.086683] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:43.317 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:43.317 could not add new controller: failed to write to nvme-fabrics device 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:43.317 15:20:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.317 15:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.691 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:44.691 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:44.691 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.691 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:44.691 15:20:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.590 [2024-11-20 15:20:50.411400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.590 15:20:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.963 15:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:47.963 15:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:47.963 15:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.963 15:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:47.963 15:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.861 15:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.861 [2024-11-20 15:20:53.715976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.861 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.862 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:49.862 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.862 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.862 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.862 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.862 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.862 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.862 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.862 15:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:51.235 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:51.235 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:51.235 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.236 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:51.236 15:20:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:53.136 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:53.136 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:53.136 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.136 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:53.136 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.136 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:53.136 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.136 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.136 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:53.136 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:53.136 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.136 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:53.136 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.136 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:53.136 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:53.136 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.136 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.136 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.136 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.136 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.136 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.396 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.396 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:53.396 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:53.396 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.396 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.396 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.396 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:11:53.396 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.396 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.396 [2024-11-20 15:20:57.063882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.396 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.396 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:53.396 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.396 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.396 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.396 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:53.396 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.396 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.396 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.396 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.329 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.329 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:54.329 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:11:54.329 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:54.329 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:56.338 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:56.338 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:56.338 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.596 [2024-11-20 15:21:00.452411] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.596 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:57.967 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.967 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:57.967 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.967 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:57.967 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:11:59.867 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:59.867 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.868 [2024-11-20 15:21:03.755759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.868 15:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.868 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.126 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.126 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:01.059 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:01.059 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:01.059 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.059 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:01.059 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:03.590 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:03.590 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:12:03.590 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:03.590 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:03.590 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.590 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:03.590 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:03.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.590 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:03.590 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:03.590 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:03.590 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.590 [2024-11-20 15:21:07.066792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.590 [2024-11-20 15:21:07.114894] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.590 
15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.590 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.591 [2024-11-20 15:21:07.163053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:03.591 
15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.591 [2024-11-20 15:21:07.211212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.591 [2024-11-20 
15:21:07.259398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.591 
15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:03.591 "tick_rate": 2300000000, 00:12:03.591 "poll_groups": [ 00:12:03.591 { 00:12:03.591 "name": "nvmf_tgt_poll_group_000", 00:12:03.591 "admin_qpairs": 2, 00:12:03.591 "io_qpairs": 168, 00:12:03.591 "current_admin_qpairs": 0, 00:12:03.591 "current_io_qpairs": 0, 00:12:03.591 "pending_bdev_io": 0, 00:12:03.591 "completed_nvme_io": 221, 00:12:03.591 "transports": [ 00:12:03.591 { 00:12:03.591 "trtype": "TCP" 00:12:03.591 } 00:12:03.591 ] 00:12:03.591 }, 00:12:03.591 { 00:12:03.591 "name": "nvmf_tgt_poll_group_001", 00:12:03.591 "admin_qpairs": 2, 00:12:03.591 "io_qpairs": 168, 00:12:03.591 "current_admin_qpairs": 0, 00:12:03.591 "current_io_qpairs": 0, 00:12:03.591 "pending_bdev_io": 0, 00:12:03.591 "completed_nvme_io": 266, 00:12:03.591 "transports": [ 00:12:03.591 { 00:12:03.591 "trtype": "TCP" 00:12:03.591 } 00:12:03.591 ] 00:12:03.591 }, 00:12:03.591 { 00:12:03.591 "name": "nvmf_tgt_poll_group_002", 00:12:03.591 "admin_qpairs": 1, 00:12:03.591 "io_qpairs": 168, 00:12:03.591 "current_admin_qpairs": 0, 00:12:03.591 "current_io_qpairs": 0, 00:12:03.591 "pending_bdev_io": 0, 00:12:03.591 "completed_nvme_io": 267, 00:12:03.591 "transports": [ 00:12:03.591 { 00:12:03.591 "trtype": "TCP" 00:12:03.591 } 00:12:03.591 ] 00:12:03.591 }, 00:12:03.591 { 00:12:03.591 "name": "nvmf_tgt_poll_group_003", 00:12:03.591 "admin_qpairs": 2, 00:12:03.591 "io_qpairs": 168, 
00:12:03.591 "current_admin_qpairs": 0, 00:12:03.591 "current_io_qpairs": 0, 00:12:03.591 "pending_bdev_io": 0, 00:12:03.591 "completed_nvme_io": 268, 00:12:03.591 "transports": [ 00:12:03.591 { 00:12:03.591 "trtype": "TCP" 00:12:03.591 } 00:12:03.591 ] 00:12:03.591 } 00:12:03.591 ] 00:12:03.591 }' 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:03.591 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:03.592 rmmod nvme_tcp 00:12:03.592 rmmod nvme_fabrics 00:12:03.592 rmmod nvme_keyring 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2093494 ']' 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2093494 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2093494 ']' 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2093494 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.592 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2093494 00:12:03.851 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.851 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.851 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2093494' 00:12:03.851 killing process with pid 2093494 00:12:03.851 15:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2093494 00:12:03.851 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2093494 00:12:03.851 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:03.851 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:03.851 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:03.851 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:03.851 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:03.851 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:03.851 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:03.851 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:03.851 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:03.851 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.851 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.851 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.389 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:06.389 00:12:06.389 real 0m33.452s 00:12:06.389 user 1m41.361s 00:12:06.389 sys 0m6.660s 00:12:06.389 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.389 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.389 ************************************ 00:12:06.389 END TEST 
nvmf_rpc 00:12:06.389 ************************************ 00:12:06.389 15:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:06.389 15:21:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:06.389 15:21:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.389 15:21:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:06.389 ************************************ 00:12:06.389 START TEST nvmf_invalid 00:12:06.389 ************************************ 00:12:06.389 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:06.389 * Looking for test storage... 00:12:06.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.389 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:06.389 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:06.389 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:06.389 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:06.389 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:06.389 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:06.389 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:06.389 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.389 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:12:06.389 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:06.389 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:06.389 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:06.389 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:06.389 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:06.389 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:06.389 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:06.389 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:06.389 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:06.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.390 --rc genhtml_branch_coverage=1 00:12:06.390 --rc genhtml_function_coverage=1 00:12:06.390 --rc genhtml_legend=1 00:12:06.390 --rc geninfo_all_blocks=1 00:12:06.390 --rc geninfo_unexecuted_blocks=1 00:12:06.390 00:12:06.390 ' 
00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:06.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.390 --rc genhtml_branch_coverage=1 00:12:06.390 --rc genhtml_function_coverage=1 00:12:06.390 --rc genhtml_legend=1 00:12:06.390 --rc geninfo_all_blocks=1 00:12:06.390 --rc geninfo_unexecuted_blocks=1 00:12:06.390 00:12:06.390 ' 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:06.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.390 --rc genhtml_branch_coverage=1 00:12:06.390 --rc genhtml_function_coverage=1 00:12:06.390 --rc genhtml_legend=1 00:12:06.390 --rc geninfo_all_blocks=1 00:12:06.390 --rc geninfo_unexecuted_blocks=1 00:12:06.390 00:12:06.390 ' 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:06.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.390 --rc genhtml_branch_coverage=1 00:12:06.390 --rc genhtml_function_coverage=1 00:12:06.390 --rc genhtml_legend=1 00:12:06.390 --rc geninfo_all_blocks=1 00:12:06.390 --rc geninfo_unexecuted_blocks=1 00:12:06.390 00:12:06.390 ' 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.390 15:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.390 
15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.390 15:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:06.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:06.390 15:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:06.390 15:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:12.964 15:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.964 15:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.964 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:12.965 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:12.965 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:12.965 Found net devices under 0000:86:00.0: cvl_0_0 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:12.965 Found net devices under 0000:86:00.1: cvl_0_1 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:12.965 15:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:12.965 15:21:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:12.965 15:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:12.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:12.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:12:12.965 00:12:12.965 --- 10.0.0.2 ping statistics --- 00:12:12.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.965 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:12.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:12.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:12:12.965 00:12:12.965 --- 10.0.0.1 ping statistics --- 00:12:12.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.965 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:12.965 15:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2101814 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2101814 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2101814 ']' 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
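`waitforlisten 2101814` above blocks until the freshly started `nvmf_tgt` is alive and accepting RPCs on `/var/tmp/spdk.sock`. A minimal sketch of that idea (names and polling details are illustrative, not SPDK's exact implementation) is a retry loop that checks both the PID and the socket:

```shell
# Poll until a PID is alive and its RPC socket path appears, or give up
# after max_retries (hypothetical stand-in for SPDK's waitforlisten).
wait_for_listen() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died
        [[ -S $rpc_addr ]] && return 0           # socket is up
        sleep 0.1
    done
    return 1                                      # timed out
}
```

The real helper goes further and issues an actual RPC over the socket; existence of the socket file alone only proves the listener got as far as binding it.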
00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.965 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:12.965 [2024-11-20 15:21:16.136051] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:12:12.965 [2024-11-20 15:21:16.136103] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.965 [2024-11-20 15:21:16.218074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.965 [2024-11-20 15:21:16.259310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.965 [2024-11-20 15:21:16.259352] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.965 [2024-11-20 15:21:16.259360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.966 [2024-11-20 15:21:16.259366] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.966 [2024-11-20 15:21:16.259371] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:12.966 [2024-11-20 15:21:16.260815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.966 [2024-11-20 15:21:16.260920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.966 [2024-11-20 15:21:16.261007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.966 [2024-11-20 15:21:16.261008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.966 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.966 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:12.966 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:12.966 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:12.966 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:12.966 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.966 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:12.966 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27463 00:12:12.966 [2024-11-20 15:21:16.570047] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:12.966 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:12.966 { 00:12:12.966 "nqn": "nqn.2016-06.io.spdk:cnode27463", 00:12:12.966 "tgt_name": "foobar", 00:12:12.966 "method": "nvmf_create_subsystem", 00:12:12.966 "req_id": 1 00:12:12.966 } 00:12:12.966 Got JSON-RPC error 
response 00:12:12.966 response: 00:12:12.966 { 00:12:12.966 "code": -32603, 00:12:12.966 "message": "Unable to find target foobar" 00:12:12.966 }' 00:12:12.966 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:12.966 { 00:12:12.966 "nqn": "nqn.2016-06.io.spdk:cnode27463", 00:12:12.966 "tgt_name": "foobar", 00:12:12.966 "method": "nvmf_create_subsystem", 00:12:12.966 "req_id": 1 00:12:12.966 } 00:12:12.966 Got JSON-RPC error response 00:12:12.966 response: 00:12:12.966 { 00:12:12.966 "code": -32603, 00:12:12.966 "message": "Unable to find target foobar" 00:12:12.966 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:12.966 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:12.966 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14046 00:12:12.966 [2024-11-20 15:21:16.774780] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14046: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:12.966 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:12.966 { 00:12:12.966 "nqn": "nqn.2016-06.io.spdk:cnode14046", 00:12:12.966 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:12.966 "method": "nvmf_create_subsystem", 00:12:12.966 "req_id": 1 00:12:12.966 } 00:12:12.966 Got JSON-RPC error response 00:12:12.966 response: 00:12:12.966 { 00:12:12.966 "code": -32602, 00:12:12.966 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:12.966 }' 00:12:12.966 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:12.966 { 00:12:12.966 "nqn": "nqn.2016-06.io.spdk:cnode14046", 00:12:12.966 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:12.966 "method": "nvmf_create_subsystem", 
00:12:12.966 "req_id": 1 00:12:12.966 } 00:12:12.966 Got JSON-RPC error response 00:12:12.966 response: 00:12:12.966 { 00:12:12.966 "code": -32602, 00:12:12.966 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:12.966 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:12.966 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:12.966 15:21:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28337 00:12:13.288 [2024-11-20 15:21:16.983479] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28337: invalid model number 'SPDK_Controller' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:13.288 { 00:12:13.288 "nqn": "nqn.2016-06.io.spdk:cnode28337", 00:12:13.288 "model_number": "SPDK_Controller\u001f", 00:12:13.288 "method": "nvmf_create_subsystem", 00:12:13.288 "req_id": 1 00:12:13.288 } 00:12:13.288 Got JSON-RPC error response 00:12:13.288 response: 00:12:13.288 { 00:12:13.288 "code": -32602, 00:12:13.288 "message": "Invalid MN SPDK_Controller\u001f" 00:12:13.288 }' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:13.288 { 00:12:13.288 "nqn": "nqn.2016-06.io.spdk:cnode28337", 00:12:13.288 "model_number": "SPDK_Controller\u001f", 00:12:13.288 "method": "nvmf_create_subsystem", 00:12:13.288 "req_id": 1 00:12:13.288 } 00:12:13.288 Got JSON-RPC error response 00:12:13.288 response: 00:12:13.288 { 00:12:13.288 "code": -32602, 00:12:13.288 "message": "Invalid MN SPDK_Controller\u001f" 00:12:13.288 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.288 
15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 
00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 
00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:13.288 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.289 
15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ) == \- ]] 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ')\=Ax!7^e?`s%rWx~LLO/' 00:12:13.289 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ')\=Ax!7^e?`s%rWx~LLO/' nqn.2016-06.io.spdk:cnode3708 00:12:13.669 [2024-11-20 15:21:17.336732] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3708: invalid serial number ')\=Ax!7^e?`s%rWx~LLO/' 00:12:13.669 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:13.669 { 00:12:13.669 "nqn": "nqn.2016-06.io.spdk:cnode3708", 00:12:13.669 "serial_number": ")\\=Ax!7^e?`s%rWx~LLO/", 00:12:13.669 "method": "nvmf_create_subsystem", 00:12:13.669 "req_id": 1 00:12:13.669 } 00:12:13.669 Got JSON-RPC error response 00:12:13.669 response: 00:12:13.669 { 00:12:13.669 "code": -32602, 00:12:13.669 "message": "Invalid SN )\\=Ax!7^e?`s%rWx~LLO/" 00:12:13.669 }' 00:12:13.669 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:13.669 { 00:12:13.669 "nqn": "nqn.2016-06.io.spdk:cnode3708", 00:12:13.669 "serial_number": ")\\=Ax!7^e?`s%rWx~LLO/", 00:12:13.669 "method": "nvmf_create_subsystem", 00:12:13.669 "req_id": 1 00:12:13.669 } 00:12:13.669 Got JSON-RPC error response 00:12:13.669 response: 00:12:13.669 { 00:12:13.669 "code": -32602, 00:12:13.669 "message": "Invalid SN )\\=Ax!7^e?`s%rWx~LLO/" 00:12:13.669 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:13.670 
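The long `printf`/`echo` trace above is bash xtrace output from invalid.sh's `gen_random_s` helper, which assembles a random serial number one printable-ASCII character at a time before passing it to `nvmf_create_subsystem -s`. A minimal standalone sketch of that loop follows; this is not the real helper, and it draws from codes 33-126 rather than the script's 32-127 range so the result survives command substitution without whitespace trimming:

```shell
# Sketch of the gen_random_s pattern visible in the xtrace above:
# append one random printable character per iteration until $length is reached.
gen_random_s() {
    local length=$1 ll string= code ch
    for ((ll = 0; ll < length; ll++)); do
        # Random code point in 33..126 (the real script's chars array spans 32..127)
        code=$((RANDOM % 94 + 33))
        # Turn the code point into a character via an octal escape in the format string
        printf -v ch "\\$(printf '%03o' "$code")"
        string+=$ch
    done
    printf '%s\n' "$string"
}

gen_random_s 21
```

The real helper indexes a pre-built `chars` array instead of computing escapes inline, which is why the trace shows one `printf %x` / `echo -e` / `string+=` triple per character.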
15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:13.670 15:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:13.670 15:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:13.670 15:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.670 15:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:13.670 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:13.671 15:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:13.671 15:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:13.671 15:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.671 15:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.671 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.930 15:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ | == \- ]] 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '|;ic~['\''hnh/rBOe^<*3Q3ntG8'\''`r11UEE905%A{f/' 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '|;ic~['\''hnh/rBOe^<*3Q3ntG8'\''`r11UEE905%A{f/' nqn.2016-06.io.spdk:cnode18005 00:12:13.930 [2024-11-20 15:21:17.766159] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18005: invalid model number '|;ic~['hnh/rBOe^<*3Q3ntG8'`r11UEE905%A{f/' 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:13.930 { 00:12:13.930 "nqn": "nqn.2016-06.io.spdk:cnode18005", 00:12:13.930 "model_number": "|;ic~['\''hnh/rBOe^<*3Q3ntG8'\''`r11UEE905%A{f/", 00:12:13.930 "method": "nvmf_create_subsystem", 00:12:13.930 "req_id": 1 00:12:13.930 } 00:12:13.930 Got JSON-RPC error response 00:12:13.930 response: 00:12:13.930 { 00:12:13.930 "code": -32602, 00:12:13.930 "message": "Invalid MN |;ic~['\''hnh/rBOe^<*3Q3ntG8'\''`r11UEE905%A{f/" 00:12:13.930 }' 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:13.930 { 
00:12:13.930 "nqn": "nqn.2016-06.io.spdk:cnode18005", 00:12:13.930 "model_number": "|;ic~['hnh/rBOe^<*3Q3ntG8'`r11UEE905%A{f/", 00:12:13.930 "method": "nvmf_create_subsystem", 00:12:13.930 "req_id": 1 00:12:13.930 } 00:12:13.930 Got JSON-RPC error response 00:12:13.930 response: 00:12:13.930 { 00:12:13.930 "code": -32602, 00:12:13.930 "message": "Invalid MN |;ic~['hnh/rBOe^<*3Q3ntG8'`r11UEE905%A{f/" 00:12:13.930 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:13.930 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:14.189 [2024-11-20 15:21:17.966996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.189 15:21:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:14.447 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:14.447 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:14.447 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:14.447 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:14.447 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:14.705 [2024-11-20 15:21:18.384339] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:14.705 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:14.705 { 00:12:14.705 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:14.705 "listen_address": { 00:12:14.705 "trtype": "tcp", 00:12:14.705 "traddr": "", 00:12:14.705 
"trsvcid": "4421" 00:12:14.705 }, 00:12:14.705 "method": "nvmf_subsystem_remove_listener", 00:12:14.705 "req_id": 1 00:12:14.705 } 00:12:14.705 Got JSON-RPC error response 00:12:14.705 response: 00:12:14.705 { 00:12:14.705 "code": -32602, 00:12:14.705 "message": "Invalid parameters" 00:12:14.705 }' 00:12:14.705 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:14.705 { 00:12:14.705 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:14.705 "listen_address": { 00:12:14.705 "trtype": "tcp", 00:12:14.705 "traddr": "", 00:12:14.705 "trsvcid": "4421" 00:12:14.705 }, 00:12:14.705 "method": "nvmf_subsystem_remove_listener", 00:12:14.705 "req_id": 1 00:12:14.705 } 00:12:14.705 Got JSON-RPC error response 00:12:14.705 response: 00:12:14.705 { 00:12:14.705 "code": -32602, 00:12:14.705 "message": "Invalid parameters" 00:12:14.705 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:14.705 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32643 -i 0 00:12:14.705 [2024-11-20 15:21:18.588997] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32643: invalid cntlid range [0-65519] 00:12:14.964 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:14.964 { 00:12:14.964 "nqn": "nqn.2016-06.io.spdk:cnode32643", 00:12:14.964 "min_cntlid": 0, 00:12:14.964 "method": "nvmf_create_subsystem", 00:12:14.964 "req_id": 1 00:12:14.964 } 00:12:14.964 Got JSON-RPC error response 00:12:14.964 response: 00:12:14.964 { 00:12:14.964 "code": -32602, 00:12:14.964 "message": "Invalid cntlid range [0-65519]" 00:12:14.964 }' 00:12:14.964 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:14.964 { 00:12:14.964 "nqn": "nqn.2016-06.io.spdk:cnode32643", 00:12:14.964 "min_cntlid": 0, 00:12:14.964 
"method": "nvmf_create_subsystem", 00:12:14.964 "req_id": 1 00:12:14.964 } 00:12:14.964 Got JSON-RPC error response 00:12:14.964 response: 00:12:14.964 { 00:12:14.964 "code": -32602, 00:12:14.964 "message": "Invalid cntlid range [0-65519]" 00:12:14.964 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:14.964 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5734 -i 65520 00:12:14.964 [2024-11-20 15:21:18.801705] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5734: invalid cntlid range [65520-65519] 00:12:14.964 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:14.964 { 00:12:14.964 "nqn": "nqn.2016-06.io.spdk:cnode5734", 00:12:14.964 "min_cntlid": 65520, 00:12:14.964 "method": "nvmf_create_subsystem", 00:12:14.964 "req_id": 1 00:12:14.964 } 00:12:14.964 Got JSON-RPC error response 00:12:14.964 response: 00:12:14.964 { 00:12:14.964 "code": -32602, 00:12:14.964 "message": "Invalid cntlid range [65520-65519]" 00:12:14.964 }' 00:12:14.964 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:14.964 { 00:12:14.964 "nqn": "nqn.2016-06.io.spdk:cnode5734", 00:12:14.964 "min_cntlid": 65520, 00:12:14.964 "method": "nvmf_create_subsystem", 00:12:14.964 "req_id": 1 00:12:14.964 } 00:12:14.964 Got JSON-RPC error response 00:12:14.964 response: 00:12:14.964 { 00:12:14.964 "code": -32602, 00:12:14.964 "message": "Invalid cntlid range [65520-65519]" 00:12:14.964 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:14.964 15:21:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30150 -I 0 00:12:15.222 [2024-11-20 15:21:19.006417] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: 
*ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30150: invalid cntlid range [1-0] 00:12:15.222 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:15.222 { 00:12:15.222 "nqn": "nqn.2016-06.io.spdk:cnode30150", 00:12:15.222 "max_cntlid": 0, 00:12:15.222 "method": "nvmf_create_subsystem", 00:12:15.222 "req_id": 1 00:12:15.222 } 00:12:15.222 Got JSON-RPC error response 00:12:15.222 response: 00:12:15.222 { 00:12:15.222 "code": -32602, 00:12:15.222 "message": "Invalid cntlid range [1-0]" 00:12:15.222 }' 00:12:15.222 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:15.222 { 00:12:15.222 "nqn": "nqn.2016-06.io.spdk:cnode30150", 00:12:15.222 "max_cntlid": 0, 00:12:15.222 "method": "nvmf_create_subsystem", 00:12:15.222 "req_id": 1 00:12:15.222 } 00:12:15.222 Got JSON-RPC error response 00:12:15.222 response: 00:12:15.222 { 00:12:15.222 "code": -32602, 00:12:15.222 "message": "Invalid cntlid range [1-0]" 00:12:15.222 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:15.222 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14402 -I 65520 00:12:15.479 [2024-11-20 15:21:19.215167] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14402: invalid cntlid range [1-65520] 00:12:15.479 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:15.479 { 00:12:15.479 "nqn": "nqn.2016-06.io.spdk:cnode14402", 00:12:15.479 "max_cntlid": 65520, 00:12:15.479 "method": "nvmf_create_subsystem", 00:12:15.479 "req_id": 1 00:12:15.479 } 00:12:15.479 Got JSON-RPC error response 00:12:15.479 response: 00:12:15.479 { 00:12:15.479 "code": -32602, 00:12:15.479 "message": "Invalid cntlid range [1-65520]" 00:12:15.479 }' 00:12:15.479 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:12:15.479 { 00:12:15.479 "nqn": "nqn.2016-06.io.spdk:cnode14402", 00:12:15.479 "max_cntlid": 65520, 00:12:15.479 "method": "nvmf_create_subsystem", 00:12:15.479 "req_id": 1 00:12:15.479 } 00:12:15.479 Got JSON-RPC error response 00:12:15.479 response: 00:12:15.479 { 00:12:15.479 "code": -32602, 00:12:15.479 "message": "Invalid cntlid range [1-65520]" 00:12:15.479 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:15.479 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17949 -i 6 -I 5 00:12:15.738 [2024-11-20 15:21:19.411833] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17949: invalid cntlid range [6-5] 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:15.738 { 00:12:15.738 "nqn": "nqn.2016-06.io.spdk:cnode17949", 00:12:15.738 "min_cntlid": 6, 00:12:15.738 "max_cntlid": 5, 00:12:15.738 "method": "nvmf_create_subsystem", 00:12:15.738 "req_id": 1 00:12:15.738 } 00:12:15.738 Got JSON-RPC error response 00:12:15.738 response: 00:12:15.738 { 00:12:15.738 "code": -32602, 00:12:15.738 "message": "Invalid cntlid range [6-5]" 00:12:15.738 }' 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:15.738 { 00:12:15.738 "nqn": "nqn.2016-06.io.spdk:cnode17949", 00:12:15.738 "min_cntlid": 6, 00:12:15.738 "max_cntlid": 5, 00:12:15.738 "method": "nvmf_create_subsystem", 00:12:15.738 "req_id": 1 00:12:15.738 } 00:12:15.738 Got JSON-RPC error response 00:12:15.738 response: 00:12:15.738 { 00:12:15.738 "code": -32602, 00:12:15.738 "message": "Invalid cntlid range [6-5]" 00:12:15.738 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:15.738 { 00:12:15.738 "name": "foobar", 00:12:15.738 "method": "nvmf_delete_target", 00:12:15.738 "req_id": 1 00:12:15.738 } 00:12:15.738 Got JSON-RPC error response 00:12:15.738 response: 00:12:15.738 { 00:12:15.738 "code": -32602, 00:12:15.738 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:15.738 }' 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:15.738 { 00:12:15.738 "name": "foobar", 00:12:15.738 "method": "nvmf_delete_target", 00:12:15.738 "req_id": 1 00:12:15.738 } 00:12:15.738 Got JSON-RPC error response 00:12:15.738 response: 00:12:15.738 { 00:12:15.738 "code": -32602, 00:12:15.738 "message": "The specified target doesn't exist, cannot delete it." 00:12:15.738 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:15.738 rmmod nvme_tcp 00:12:15.738 
rmmod nvme_fabrics 00:12:15.738 rmmod nvme_keyring 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2101814 ']' 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2101814 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2101814 ']' 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2101814 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2101814 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2101814' 00:12:15.738 killing process with pid 2101814 00:12:15.738 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2101814 00:12:15.739 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2101814 00:12:15.998 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:15.998 15:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:15.998 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:15.998 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:15.998 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:15.998 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:15.998 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:15.998 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:15.998 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:15.998 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.998 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.998 15:21:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.535 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:18.535 00:12:18.535 real 0m12.013s 00:12:18.535 user 0m18.446s 00:12:18.535 sys 0m5.326s 00:12:18.535 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.535 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:18.535 ************************************ 00:12:18.535 END TEST nvmf_invalid 00:12:18.535 ************************************ 00:12:18.535 15:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:12:18.535 15:21:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:18.535 15:21:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.535 15:21:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:18.535 ************************************ 00:12:18.535 START TEST nvmf_connect_stress 00:12:18.535 ************************************ 00:12:18.535 15:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:18.535 * Looking for test storage... 00:12:18.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 
00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:18.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.535 --rc genhtml_branch_coverage=1 00:12:18.535 --rc genhtml_function_coverage=1 00:12:18.535 --rc genhtml_legend=1 00:12:18.535 --rc 
geninfo_all_blocks=1 00:12:18.535 --rc geninfo_unexecuted_blocks=1 00:12:18.535 00:12:18.535 ' 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:18.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.535 --rc genhtml_branch_coverage=1 00:12:18.535 --rc genhtml_function_coverage=1 00:12:18.535 --rc genhtml_legend=1 00:12:18.535 --rc geninfo_all_blocks=1 00:12:18.535 --rc geninfo_unexecuted_blocks=1 00:12:18.535 00:12:18.535 ' 00:12:18.535 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:18.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.535 --rc genhtml_branch_coverage=1 00:12:18.535 --rc genhtml_function_coverage=1 00:12:18.535 --rc genhtml_legend=1 00:12:18.535 --rc geninfo_all_blocks=1 00:12:18.535 --rc geninfo_unexecuted_blocks=1 00:12:18.535 00:12:18.535 ' 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:18.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.536 --rc genhtml_branch_coverage=1 00:12:18.536 --rc genhtml_function_coverage=1 00:12:18.536 --rc genhtml_legend=1 00:12:18.536 --rc geninfo_all_blocks=1 00:12:18.536 --rc geninfo_unexecuted_blocks=1 00:12:18.536 00:12:18.536 ' 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:18.536 
15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:12:18.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit
00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs
00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no
00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns
00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable
00:12:18.536 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=()
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=()
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=()
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=()
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=()
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=()
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=()
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:12:25.104 Found 0000:86:00.0 (0x8086 - 0x159b)
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:25.104 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:12:25.105 Found 0000:86:00.1 (0x8086 - 0x159b)
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:12:25.105 Found net devices under 0000:86:00.0: cvl_0_0
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:12:25.105 Found net devices under 0000:86:00.1: cvl_0_1
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:25.105 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:25.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:25.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms
00:12:25.105 
00:12:25.105 --- 10.0.0.2 ping statistics ---
00:12:25.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:25.105 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:25.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:25.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms
00:12:25.105 
00:12:25.105 --- 10.0.0.1 ping statistics ---
00:12:25.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:25.105 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2106008
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2106008
00:12:25.105 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2106008 ']'
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:25.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:25.106 [2024-11-20 15:21:28.163035] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization...
00:12:25.106 [2024-11-20 15:21:28.163088] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:25.106 [2024-11-20 15:21:28.242356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:12:25.106 [2024-11-20 15:21:28.284797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:25.106 [2024-11-20 15:21:28.284834] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:25.106 [2024-11-20 15:21:28.284845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:25.106 [2024-11-20 15:21:28.284851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:25.106 [2024-11-20 15:21:28.284856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:25.106 [2024-11-20 15:21:28.286325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:25.106 [2024-11-20 15:21:28.286444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:25.106 [2024-11-20 15:21:28.286445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:25.106 [2024-11-20 15:21:28.427323] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:25.106 [2024-11-20 15:21:28.447538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:25.106 NULL1
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2106068
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:25.106 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:25.107 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:25.107 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.107 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:25.107 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.107 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:25.107 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:25.107 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.107 15:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:25.364 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.364 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:25.364 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:25.364 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.364 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:25.621 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.621 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:25.621 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:25.621 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.621 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:26.185 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:26.185 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:26.185 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:26.185 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:26.185 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:26.443 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:26.443 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:26.443 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:26.443 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:26.443 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:26.700 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:26.700 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:26.700 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:26.700 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:26.700 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:26.958 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:26.958 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:26.958 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:26.958 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:26.958 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:27.524 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:27.524 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:27.524 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:27.524 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:27.524 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:27.781 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:27.781 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:27.781 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:27.781 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:27.781 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:28.039 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.039 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:28.039 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:28.039 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.039 15:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:28.297 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.297 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:28.297 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:28.297 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.297 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:28.554 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.554 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:28.555 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:28.555 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.555 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:29.120 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.120 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:29.120 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:29.120 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.120 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:29.378 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.378 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:29.378 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:29.378 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.378 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:29.636 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.636 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:29.636 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:29.636 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.636 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:29.893 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.893 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:29.893 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:29.893 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.893 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:30.459 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.459 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:30.459 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:30.459 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.459 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:30.717 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.717 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:30.717 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:30.717 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.717 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:30.975 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.975 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:30.975 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:30.975 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.975 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:31.233 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:31.233 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068
00:12:31.233 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:31.233 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:31.233 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set
+x 00:12:31.491 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.491 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068 00:12:31.491 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.491 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.491 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.056 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.056 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068 00:12:32.056 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.056 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.056 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.314 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.314 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068 00:12:32.314 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.314 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.314 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.572 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.572 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2106068 00:12:32.572 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.572 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.572 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.829 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.829 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068 00:12:32.829 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.829 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.829 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:33.395 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.395 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068 00:12:33.395 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:33.395 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.395 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:33.652 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.652 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068 00:12:33.652 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:33.652 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:33.652 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:33.910 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.910 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068 00:12:33.910 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:33.910 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.910 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.168 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.168 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068 00:12:34.168 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:34.168 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.168 15:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.426 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.426 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068 00:12:34.426 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:34.426 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.426 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.684 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2106068 00:12:34.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2106068) - No such process 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2106068 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:34.942 rmmod nvme_tcp 00:12:34.942 rmmod nvme_fabrics 00:12:34.942 rmmod nvme_keyring 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2106008 ']' 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2106008 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2106008 ']' 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2106008 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2106008 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2106008' 00:12:34.942 killing process with pid 2106008 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2106008 00:12:34.942 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2106008 00:12:35.202 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:35.202 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:35.202 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:35.202 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:12:35.202 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:35.202 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:35.202 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:35.202 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:35.202 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:35.202 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.202 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.202 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.108 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:37.108 00:12:37.108 real 0m19.043s 00:12:37.108 user 0m39.643s 00:12:37.108 sys 0m8.435s 00:12:37.108 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.108 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.108 ************************************ 00:12:37.108 END TEST nvmf_connect_stress 00:12:37.108 ************************************ 00:12:37.368 15:21:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:37.368 15:21:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.368 15:21:41 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.368 15:21:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:37.368 ************************************ 00:12:37.369 START TEST nvmf_fused_ordering 00:12:37.369 ************************************ 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:37.369 * Looking for test storage... 00:12:37.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.369 15:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:37.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.369 --rc genhtml_branch_coverage=1 00:12:37.369 --rc genhtml_function_coverage=1 00:12:37.369 --rc genhtml_legend=1 00:12:37.369 --rc geninfo_all_blocks=1 00:12:37.369 --rc geninfo_unexecuted_blocks=1 00:12:37.369 00:12:37.369 ' 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:37.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.369 --rc genhtml_branch_coverage=1 00:12:37.369 --rc genhtml_function_coverage=1 00:12:37.369 --rc genhtml_legend=1 00:12:37.369 --rc geninfo_all_blocks=1 00:12:37.369 --rc geninfo_unexecuted_blocks=1 00:12:37.369 00:12:37.369 ' 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:37.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.369 --rc genhtml_branch_coverage=1 00:12:37.369 --rc genhtml_function_coverage=1 00:12:37.369 --rc genhtml_legend=1 00:12:37.369 --rc geninfo_all_blocks=1 00:12:37.369 --rc geninfo_unexecuted_blocks=1 00:12:37.369 00:12:37.369 ' 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:37.369 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:37.369 --rc genhtml_branch_coverage=1 00:12:37.369 --rc genhtml_function_coverage=1 00:12:37.369 --rc genhtml_legend=1 00:12:37.369 --rc geninfo_all_blocks=1 00:12:37.369 --rc geninfo_unexecuted_blocks=1 00:12:37.369 00:12:37.369 ' 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.369 15:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.369 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:37.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:37.370 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:37.370 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:37.370 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:37.629 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:12:37.629 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:37.629 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.629 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:37.629 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:37.629 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:37.629 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.629 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.629 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.629 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:37.629 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:37.629 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:37.629 15:21:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.202 15:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:44.202 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.202 15:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:44.202 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:44.202 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.203 15:21:46 
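The discovery loop above buckets each NVMe-capable NIC by its PCI device ID before picking interfaces for the TCP test. A minimal standalone sketch of that bucketing, with the IDs copied from the `nvmf/common.sh` trace (`classify_nic` itself is a hypothetical helper, not a function from the SPDK scripts):

```shell
#!/usr/bin/env bash
# Bucket a PCI device ID into the NIC families tracked by nvmf/common.sh.
# The device IDs are copied from the trace above; classify_nic is a
# hypothetical standalone helper for illustration only.
classify_nic() {
    case "$1" in
        0x1592|0x159b) echo e810 ;;   # Intel E810 (ice driver)
        0x37d2) echo x722 ;;          # Intel X722 (i40e driver)
        0xa2dc|0x1021|0xa2d6|0x101d|0x101b|0x1017|0x1019|0x1015|0x1013)
            echo mlx ;;               # Mellanox ConnectX family
        *) echo unknown ;;
    esac
}

# Both ports discovered in this run (0000:86:00.0 and 0000:86:00.1) report 0x159b:
classify_nic 0x159b   # -> e810
```

Because both devices land in the e810 bucket, the script then replaces `pci_devs` with the e810 list and walks `/sys/bus/pci/devices/$pci/net/` to find the `cvl_0_0`/`cvl_0_1` net devices seen above.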
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:44.203 Found net devices under 0000:86:00.0: cvl_0_0 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:44.203 Found net devices under 0000:86:00.1: cvl_0_1 
00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.203 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:44.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:44.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:12:44.203 00:12:44.203 --- 10.0.0.2 ping statistics --- 00:12:44.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.203 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:44.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:44.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:12:44.203 00:12:44.203 --- 10.0.0.1 ping statistics --- 00:12:44.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.203 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:44.203 15:21:47 
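The `nvmf_tcp_init` steps above carve one port into a network namespace so the SPDK target (10.0.0.2 on `cvl_0_0` inside `cvl_0_0_ns_spdk`) and the initiator (10.0.0.1 on `cvl_0_1` on the host) can talk over real hardware on a single machine. A dry-run sketch of that wiring follows; it only prints the commands (`print_netns_plan` is a hypothetical helper; actually running these needs root and the real interfaces):

```shell
#!/usr/bin/env bash
# Print the namespace wiring that nvmf_tcp_init in nvmf/common.sh performs.
# Interface and namespace names come from the log; this is a dry run only.
print_netns_plan() {
    local target_if=$1 initiator_if=$2 ns=$3
    cat <<EOF
ip -4 addr flush $target_if
ip -4 addr flush $initiator_if
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT
EOF
}

print_netns_plan cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

The two pings in the trace (10.0.0.2 from the host side, then 10.0.0.1 from inside the namespace) verify this wiring in both directions before the target is started under `ip netns exec`.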
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2111411 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2111411 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2111411 ']' 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:44.203 [2024-11-20 15:21:47.339630] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:12:44.203 [2024-11-20 15:21:47.339679] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.203 [2024-11-20 15:21:47.405363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.203 [2024-11-20 15:21:47.446433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.203 [2024-11-20 15:21:47.446467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.203 [2024-11-20 15:21:47.446475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.203 [2024-11-20 15:21:47.446481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.203 [2024-11-20 15:21:47.446486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:44.203 [2024-11-20 15:21:47.447056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.203 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:44.204 [2024-11-20 15:21:47.590533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:44.204 [2024-11-20 15:21:47.610715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:44.204 NULL1 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
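The `rpc_cmd` calls in `target/fused_ordering.sh` build the target in five steps: create the TCP transport, create subsystem `cnode1`, add a listener on 10.0.0.2:4420, back it with a null bdev, and (in the call that completes just below) attach that bdev as namespace 1. A sketch of the same sequence as plain `rpc.py` invocations, printed as a dry run (the `$RPC` path and the print-only form are assumptions; the arguments are taken from the trace):

```shell
#!/usr/bin/env bash
# Dry-run listing of the RPC sequence driven by target/fused_ordering.sh.
# scripts/rpc.py is the assumed location of the SPDK RPC client.
RPC="scripts/rpc.py"
plan=(
    "$RPC nvmf_create_transport -t tcp -o -u 8192"
    "$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10"
    "$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
    "$RPC bdev_null_create NULL1 1000 512"   # 1000 MiB null bdev, 512-byte blocks
    "$RPC bdev_wait_for_examine"
    "$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1"
)
printf '%s\n' "${plan[@]}"
```

The null bdev explains the "Namespace ID: 1 size: 1GB" line reported when the fused_ordering tool attaches to the subsystem.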
common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.204 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:44.204 [2024-11-20 15:21:47.670695] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:12:44.204 [2024-11-20 15:21:47.670728] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2111430 ] 00:12:44.204 Attached to nqn.2016-06.io.spdk:cnode1 00:12:44.204 Namespace ID: 1 size: 1GB 00:12:44.204 fused_ordering(0) 00:12:44.204 fused_ordering(1) 00:12:44.204 fused_ordering(2) 00:12:44.204 fused_ordering(3) 00:12:44.204 fused_ordering(4) 00:12:44.204 fused_ordering(5) 00:12:44.204 fused_ordering(6) 00:12:44.204 fused_ordering(7) 00:12:44.204 fused_ordering(8) 00:12:44.204 fused_ordering(9) 00:12:44.204 fused_ordering(10) 00:12:44.204 fused_ordering(11) 00:12:44.204 fused_ordering(12) 00:12:44.204 fused_ordering(13) 00:12:44.204 fused_ordering(14) 00:12:44.204 fused_ordering(15) 00:12:44.204 fused_ordering(16) 00:12:44.204 fused_ordering(17) 00:12:44.204 fused_ordering(18) 00:12:44.204 fused_ordering(19) 00:12:44.204 fused_ordering(20) 00:12:44.204 fused_ordering(21) 00:12:44.204 fused_ordering(22) 00:12:44.204 fused_ordering(23) 00:12:44.204 fused_ordering(24) 00:12:44.204 fused_ordering(25) 00:12:44.204 fused_ordering(26) 00:12:44.204 fused_ordering(27) 00:12:44.204 
fused_ordering(28) 00:12:44.204 [... sequential fused_ordering counters 29 through 512 elided: one unbroken increment-by-one stream, with log timestamps advancing 00:12:44.204 -> 00:12:44.205 -> 00:12:44.464 -> 00:12:44.723 -> 00:12:44.724 over the run ...] 00:12:44.724 fused_ordering(513)
00:12:44.724 fused_ordering(514) 00:12:44.724 fused_ordering(515) 00:12:44.724 fused_ordering(516) 00:12:44.724 fused_ordering(517) 00:12:44.724 fused_ordering(518) 00:12:44.724 fused_ordering(519) 00:12:44.724 fused_ordering(520) 00:12:44.724 fused_ordering(521) 00:12:44.724 fused_ordering(522) 00:12:44.724 fused_ordering(523) 00:12:44.724 fused_ordering(524) 00:12:44.724 fused_ordering(525) 00:12:44.724 fused_ordering(526) 00:12:44.724 fused_ordering(527) 00:12:44.724 fused_ordering(528) 00:12:44.724 fused_ordering(529) 00:12:44.724 fused_ordering(530) 00:12:44.724 fused_ordering(531) 00:12:44.724 fused_ordering(532) 00:12:44.724 fused_ordering(533) 00:12:44.724 fused_ordering(534) 00:12:44.724 fused_ordering(535) 00:12:44.724 fused_ordering(536) 00:12:44.724 fused_ordering(537) 00:12:44.724 fused_ordering(538) 00:12:44.724 fused_ordering(539) 00:12:44.724 fused_ordering(540) 00:12:44.724 fused_ordering(541) 00:12:44.724 fused_ordering(542) 00:12:44.724 fused_ordering(543) 00:12:44.724 fused_ordering(544) 00:12:44.724 fused_ordering(545) 00:12:44.724 fused_ordering(546) 00:12:44.724 fused_ordering(547) 00:12:44.724 fused_ordering(548) 00:12:44.724 fused_ordering(549) 00:12:44.724 fused_ordering(550) 00:12:44.724 fused_ordering(551) 00:12:44.724 fused_ordering(552) 00:12:44.724 fused_ordering(553) 00:12:44.724 fused_ordering(554) 00:12:44.724 fused_ordering(555) 00:12:44.724 fused_ordering(556) 00:12:44.724 fused_ordering(557) 00:12:44.724 fused_ordering(558) 00:12:44.724 fused_ordering(559) 00:12:44.724 fused_ordering(560) 00:12:44.724 fused_ordering(561) 00:12:44.724 fused_ordering(562) 00:12:44.724 fused_ordering(563) 00:12:44.724 fused_ordering(564) 00:12:44.724 fused_ordering(565) 00:12:44.724 fused_ordering(566) 00:12:44.724 fused_ordering(567) 00:12:44.724 fused_ordering(568) 00:12:44.724 fused_ordering(569) 00:12:44.724 fused_ordering(570) 00:12:44.724 fused_ordering(571) 00:12:44.724 fused_ordering(572) 00:12:44.724 fused_ordering(573) 00:12:44.724 
fused_ordering(574) 00:12:44.724 fused_ordering(575) 00:12:44.724 fused_ordering(576) 00:12:44.724 fused_ordering(577) 00:12:44.724 fused_ordering(578) 00:12:44.724 fused_ordering(579) 00:12:44.724 fused_ordering(580) 00:12:44.724 fused_ordering(581) 00:12:44.724 fused_ordering(582) 00:12:44.724 fused_ordering(583) 00:12:44.724 fused_ordering(584) 00:12:44.724 fused_ordering(585) 00:12:44.724 fused_ordering(586) 00:12:44.724 fused_ordering(587) 00:12:44.724 fused_ordering(588) 00:12:44.724 fused_ordering(589) 00:12:44.724 fused_ordering(590) 00:12:44.724 fused_ordering(591) 00:12:44.724 fused_ordering(592) 00:12:44.724 fused_ordering(593) 00:12:44.724 fused_ordering(594) 00:12:44.724 fused_ordering(595) 00:12:44.724 fused_ordering(596) 00:12:44.724 fused_ordering(597) 00:12:44.724 fused_ordering(598) 00:12:44.724 fused_ordering(599) 00:12:44.724 fused_ordering(600) 00:12:44.724 fused_ordering(601) 00:12:44.724 fused_ordering(602) 00:12:44.724 fused_ordering(603) 00:12:44.724 fused_ordering(604) 00:12:44.724 fused_ordering(605) 00:12:44.724 fused_ordering(606) 00:12:44.724 fused_ordering(607) 00:12:44.724 fused_ordering(608) 00:12:44.724 fused_ordering(609) 00:12:44.724 fused_ordering(610) 00:12:44.724 fused_ordering(611) 00:12:44.724 fused_ordering(612) 00:12:44.724 fused_ordering(613) 00:12:44.724 fused_ordering(614) 00:12:44.724 fused_ordering(615) 00:12:45.291 fused_ordering(616) 00:12:45.291 fused_ordering(617) 00:12:45.291 fused_ordering(618) 00:12:45.291 fused_ordering(619) 00:12:45.291 fused_ordering(620) 00:12:45.291 fused_ordering(621) 00:12:45.291 fused_ordering(622) 00:12:45.291 fused_ordering(623) 00:12:45.291 fused_ordering(624) 00:12:45.291 fused_ordering(625) 00:12:45.291 fused_ordering(626) 00:12:45.291 fused_ordering(627) 00:12:45.291 fused_ordering(628) 00:12:45.291 fused_ordering(629) 00:12:45.291 fused_ordering(630) 00:12:45.291 fused_ordering(631) 00:12:45.291 fused_ordering(632) 00:12:45.291 fused_ordering(633) 00:12:45.291 fused_ordering(634) 
00:12:45.291 fused_ordering(635) 00:12:45.291 fused_ordering(636) 00:12:45.291 fused_ordering(637) 00:12:45.291 fused_ordering(638) 00:12:45.291 fused_ordering(639) 00:12:45.291 fused_ordering(640) 00:12:45.291 fused_ordering(641) 00:12:45.291 fused_ordering(642) 00:12:45.291 fused_ordering(643) 00:12:45.291 fused_ordering(644) 00:12:45.291 fused_ordering(645) 00:12:45.291 fused_ordering(646) 00:12:45.291 fused_ordering(647) 00:12:45.291 fused_ordering(648) 00:12:45.291 fused_ordering(649) 00:12:45.291 fused_ordering(650) 00:12:45.291 fused_ordering(651) 00:12:45.291 fused_ordering(652) 00:12:45.291 fused_ordering(653) 00:12:45.291 fused_ordering(654) 00:12:45.291 fused_ordering(655) 00:12:45.291 fused_ordering(656) 00:12:45.291 fused_ordering(657) 00:12:45.291 fused_ordering(658) 00:12:45.291 fused_ordering(659) 00:12:45.291 fused_ordering(660) 00:12:45.291 fused_ordering(661) 00:12:45.291 fused_ordering(662) 00:12:45.291 fused_ordering(663) 00:12:45.291 fused_ordering(664) 00:12:45.291 fused_ordering(665) 00:12:45.291 fused_ordering(666) 00:12:45.291 fused_ordering(667) 00:12:45.291 fused_ordering(668) 00:12:45.291 fused_ordering(669) 00:12:45.291 fused_ordering(670) 00:12:45.291 fused_ordering(671) 00:12:45.291 fused_ordering(672) 00:12:45.291 fused_ordering(673) 00:12:45.291 fused_ordering(674) 00:12:45.291 fused_ordering(675) 00:12:45.291 fused_ordering(676) 00:12:45.291 fused_ordering(677) 00:12:45.291 fused_ordering(678) 00:12:45.291 fused_ordering(679) 00:12:45.291 fused_ordering(680) 00:12:45.291 fused_ordering(681) 00:12:45.291 fused_ordering(682) 00:12:45.291 fused_ordering(683) 00:12:45.291 fused_ordering(684) 00:12:45.291 fused_ordering(685) 00:12:45.291 fused_ordering(686) 00:12:45.291 fused_ordering(687) 00:12:45.291 fused_ordering(688) 00:12:45.291 fused_ordering(689) 00:12:45.291 fused_ordering(690) 00:12:45.291 fused_ordering(691) 00:12:45.291 fused_ordering(692) 00:12:45.291 fused_ordering(693) 00:12:45.291 fused_ordering(694) 00:12:45.291 
fused_ordering(695) 00:12:45.291 fused_ordering(696) 00:12:45.291 fused_ordering(697) 00:12:45.291 fused_ordering(698) 00:12:45.291 fused_ordering(699) 00:12:45.291 fused_ordering(700) 00:12:45.291 fused_ordering(701) 00:12:45.292 fused_ordering(702) 00:12:45.292 fused_ordering(703) 00:12:45.292 fused_ordering(704) 00:12:45.292 fused_ordering(705) 00:12:45.292 fused_ordering(706) 00:12:45.292 fused_ordering(707) 00:12:45.292 fused_ordering(708) 00:12:45.292 fused_ordering(709) 00:12:45.292 fused_ordering(710) 00:12:45.292 fused_ordering(711) 00:12:45.292 fused_ordering(712) 00:12:45.292 fused_ordering(713) 00:12:45.292 fused_ordering(714) 00:12:45.292 fused_ordering(715) 00:12:45.292 fused_ordering(716) 00:12:45.292 fused_ordering(717) 00:12:45.292 fused_ordering(718) 00:12:45.292 fused_ordering(719) 00:12:45.292 fused_ordering(720) 00:12:45.292 fused_ordering(721) 00:12:45.292 fused_ordering(722) 00:12:45.292 fused_ordering(723) 00:12:45.292 fused_ordering(724) 00:12:45.292 fused_ordering(725) 00:12:45.292 fused_ordering(726) 00:12:45.292 fused_ordering(727) 00:12:45.292 fused_ordering(728) 00:12:45.292 fused_ordering(729) 00:12:45.292 fused_ordering(730) 00:12:45.292 fused_ordering(731) 00:12:45.292 fused_ordering(732) 00:12:45.292 fused_ordering(733) 00:12:45.292 fused_ordering(734) 00:12:45.292 fused_ordering(735) 00:12:45.292 fused_ordering(736) 00:12:45.292 fused_ordering(737) 00:12:45.292 fused_ordering(738) 00:12:45.292 fused_ordering(739) 00:12:45.292 fused_ordering(740) 00:12:45.292 fused_ordering(741) 00:12:45.292 fused_ordering(742) 00:12:45.292 fused_ordering(743) 00:12:45.292 fused_ordering(744) 00:12:45.292 fused_ordering(745) 00:12:45.292 fused_ordering(746) 00:12:45.292 fused_ordering(747) 00:12:45.292 fused_ordering(748) 00:12:45.292 fused_ordering(749) 00:12:45.292 fused_ordering(750) 00:12:45.292 fused_ordering(751) 00:12:45.292 fused_ordering(752) 00:12:45.292 fused_ordering(753) 00:12:45.292 fused_ordering(754) 00:12:45.292 fused_ordering(755) 
00:12:45.292 fused_ordering(756) 00:12:45.292 fused_ordering(757) 00:12:45.292 fused_ordering(758) 00:12:45.292 fused_ordering(759) 00:12:45.292 fused_ordering(760) 00:12:45.292 fused_ordering(761) 00:12:45.292 fused_ordering(762) 00:12:45.292 fused_ordering(763) 00:12:45.292 fused_ordering(764) 00:12:45.292 fused_ordering(765) 00:12:45.292 fused_ordering(766) 00:12:45.292 fused_ordering(767) 00:12:45.292 fused_ordering(768) 00:12:45.292 fused_ordering(769) 00:12:45.292 fused_ordering(770) 00:12:45.292 fused_ordering(771) 00:12:45.292 fused_ordering(772) 00:12:45.292 fused_ordering(773) 00:12:45.292 fused_ordering(774) 00:12:45.292 fused_ordering(775) 00:12:45.292 fused_ordering(776) 00:12:45.292 fused_ordering(777) 00:12:45.292 fused_ordering(778) 00:12:45.292 fused_ordering(779) 00:12:45.292 fused_ordering(780) 00:12:45.292 fused_ordering(781) 00:12:45.292 fused_ordering(782) 00:12:45.292 fused_ordering(783) 00:12:45.292 fused_ordering(784) 00:12:45.292 fused_ordering(785) 00:12:45.292 fused_ordering(786) 00:12:45.292 fused_ordering(787) 00:12:45.292 fused_ordering(788) 00:12:45.292 fused_ordering(789) 00:12:45.292 fused_ordering(790) 00:12:45.292 fused_ordering(791) 00:12:45.292 fused_ordering(792) 00:12:45.292 fused_ordering(793) 00:12:45.292 fused_ordering(794) 00:12:45.292 fused_ordering(795) 00:12:45.292 fused_ordering(796) 00:12:45.292 fused_ordering(797) 00:12:45.292 fused_ordering(798) 00:12:45.292 fused_ordering(799) 00:12:45.292 fused_ordering(800) 00:12:45.292 fused_ordering(801) 00:12:45.292 fused_ordering(802) 00:12:45.292 fused_ordering(803) 00:12:45.292 fused_ordering(804) 00:12:45.292 fused_ordering(805) 00:12:45.292 fused_ordering(806) 00:12:45.292 fused_ordering(807) 00:12:45.292 fused_ordering(808) 00:12:45.292 fused_ordering(809) 00:12:45.292 fused_ordering(810) 00:12:45.292 fused_ordering(811) 00:12:45.292 fused_ordering(812) 00:12:45.292 fused_ordering(813) 00:12:45.292 fused_ordering(814) 00:12:45.292 fused_ordering(815) 00:12:45.292 
fused_ordering(816) 00:12:45.292 fused_ordering(817) 00:12:45.292 fused_ordering(818) 00:12:45.292 fused_ordering(819) 00:12:45.292 fused_ordering(820) 00:12:45.859 fused_ordering(821) 00:12:45.859 fused_ordering(822) 00:12:45.859 fused_ordering(823) 00:12:45.859 fused_ordering(824) 00:12:45.859 fused_ordering(825) 00:12:45.859 fused_ordering(826) 00:12:45.859 fused_ordering(827) 00:12:45.859 fused_ordering(828) 00:12:45.859 fused_ordering(829) 00:12:45.859 fused_ordering(830) 00:12:45.859 fused_ordering(831) 00:12:45.859 fused_ordering(832) 00:12:45.859 fused_ordering(833) 00:12:45.859 fused_ordering(834) 00:12:45.859 fused_ordering(835) 00:12:45.859 fused_ordering(836) 00:12:45.859 fused_ordering(837) 00:12:45.859 fused_ordering(838) 00:12:45.859 fused_ordering(839) 00:12:45.859 fused_ordering(840) 00:12:45.859 fused_ordering(841) 00:12:45.859 fused_ordering(842) 00:12:45.859 fused_ordering(843) 00:12:45.859 fused_ordering(844) 00:12:45.859 fused_ordering(845) 00:12:45.859 fused_ordering(846) 00:12:45.859 fused_ordering(847) 00:12:45.859 fused_ordering(848) 00:12:45.859 fused_ordering(849) 00:12:45.859 fused_ordering(850) 00:12:45.859 fused_ordering(851) 00:12:45.859 fused_ordering(852) 00:12:45.859 fused_ordering(853) 00:12:45.859 fused_ordering(854) 00:12:45.859 fused_ordering(855) 00:12:45.859 fused_ordering(856) 00:12:45.860 fused_ordering(857) 00:12:45.860 fused_ordering(858) 00:12:45.860 fused_ordering(859) 00:12:45.860 fused_ordering(860) 00:12:45.860 fused_ordering(861) 00:12:45.860 fused_ordering(862) 00:12:45.860 fused_ordering(863) 00:12:45.860 fused_ordering(864) 00:12:45.860 fused_ordering(865) 00:12:45.860 fused_ordering(866) 00:12:45.860 fused_ordering(867) 00:12:45.860 fused_ordering(868) 00:12:45.860 fused_ordering(869) 00:12:45.860 fused_ordering(870) 00:12:45.860 fused_ordering(871) 00:12:45.860 fused_ordering(872) 00:12:45.860 fused_ordering(873) 00:12:45.860 fused_ordering(874) 00:12:45.860 fused_ordering(875) 00:12:45.860 fused_ordering(876) 
00:12:45.860 fused_ordering(877) 00:12:45.860 fused_ordering(878) 00:12:45.860 fused_ordering(879) 00:12:45.860 fused_ordering(880) 00:12:45.860 fused_ordering(881) 00:12:45.860 fused_ordering(882) 00:12:45.860 fused_ordering(883) 00:12:45.860 fused_ordering(884) 00:12:45.860 fused_ordering(885) 00:12:45.860 fused_ordering(886) 00:12:45.860 fused_ordering(887) 00:12:45.860 fused_ordering(888) 00:12:45.860 fused_ordering(889) 00:12:45.860 fused_ordering(890) 00:12:45.860 fused_ordering(891) 00:12:45.860 fused_ordering(892) 00:12:45.860 fused_ordering(893) 00:12:45.860 fused_ordering(894) 00:12:45.860 fused_ordering(895) 00:12:45.860 fused_ordering(896) 00:12:45.860 fused_ordering(897) 00:12:45.860 fused_ordering(898) 00:12:45.860 fused_ordering(899) 00:12:45.860 fused_ordering(900) 00:12:45.860 fused_ordering(901) 00:12:45.860 fused_ordering(902) 00:12:45.860 fused_ordering(903) 00:12:45.860 fused_ordering(904) 00:12:45.860 fused_ordering(905) 00:12:45.860 fused_ordering(906) 00:12:45.860 fused_ordering(907) 00:12:45.860 fused_ordering(908) 00:12:45.860 fused_ordering(909) 00:12:45.860 fused_ordering(910) 00:12:45.860 fused_ordering(911) 00:12:45.860 fused_ordering(912) 00:12:45.860 fused_ordering(913) 00:12:45.860 fused_ordering(914) 00:12:45.860 fused_ordering(915) 00:12:45.860 fused_ordering(916) 00:12:45.860 fused_ordering(917) 00:12:45.860 fused_ordering(918) 00:12:45.860 fused_ordering(919) 00:12:45.860 fused_ordering(920) 00:12:45.860 fused_ordering(921) 00:12:45.860 fused_ordering(922) 00:12:45.860 fused_ordering(923) 00:12:45.860 fused_ordering(924) 00:12:45.860 fused_ordering(925) 00:12:45.860 fused_ordering(926) 00:12:45.860 fused_ordering(927) 00:12:45.860 fused_ordering(928) 00:12:45.860 fused_ordering(929) 00:12:45.860 fused_ordering(930) 00:12:45.860 fused_ordering(931) 00:12:45.860 fused_ordering(932) 00:12:45.860 fused_ordering(933) 00:12:45.860 fused_ordering(934) 00:12:45.860 fused_ordering(935) 00:12:45.860 fused_ordering(936) 00:12:45.860 
fused_ordering(937) 00:12:45.860 fused_ordering(938) 00:12:45.860 fused_ordering(939) 00:12:45.860 fused_ordering(940) 00:12:45.860 fused_ordering(941) 00:12:45.860 fused_ordering(942) 00:12:45.860 fused_ordering(943) 00:12:45.860 fused_ordering(944) 00:12:45.860 fused_ordering(945) 00:12:45.860 fused_ordering(946) 00:12:45.860 fused_ordering(947) 00:12:45.860 fused_ordering(948) 00:12:45.860 fused_ordering(949) 00:12:45.860 fused_ordering(950) 00:12:45.860 fused_ordering(951) 00:12:45.860 fused_ordering(952) 00:12:45.860 fused_ordering(953) 00:12:45.860 fused_ordering(954) 00:12:45.860 fused_ordering(955) 00:12:45.860 fused_ordering(956) 00:12:45.860 fused_ordering(957) 00:12:45.860 fused_ordering(958) 00:12:45.860 fused_ordering(959) 00:12:45.860 fused_ordering(960) 00:12:45.860 fused_ordering(961) 00:12:45.860 fused_ordering(962) 00:12:45.860 fused_ordering(963) 00:12:45.860 fused_ordering(964) 00:12:45.860 fused_ordering(965) 00:12:45.860 fused_ordering(966) 00:12:45.860 fused_ordering(967) 00:12:45.860 fused_ordering(968) 00:12:45.860 fused_ordering(969) 00:12:45.860 fused_ordering(970) 00:12:45.860 fused_ordering(971) 00:12:45.860 fused_ordering(972) 00:12:45.860 fused_ordering(973) 00:12:45.860 fused_ordering(974) 00:12:45.860 fused_ordering(975) 00:12:45.860 fused_ordering(976) 00:12:45.860 fused_ordering(977) 00:12:45.860 fused_ordering(978) 00:12:45.860 fused_ordering(979) 00:12:45.860 fused_ordering(980) 00:12:45.860 fused_ordering(981) 00:12:45.860 fused_ordering(982) 00:12:45.860 fused_ordering(983) 00:12:45.860 fused_ordering(984) 00:12:45.860 fused_ordering(985) 00:12:45.860 fused_ordering(986) 00:12:45.860 fused_ordering(987) 00:12:45.860 fused_ordering(988) 00:12:45.860 fused_ordering(989) 00:12:45.860 fused_ordering(990) 00:12:45.860 fused_ordering(991) 00:12:45.860 fused_ordering(992) 00:12:45.860 fused_ordering(993) 00:12:45.860 fused_ordering(994) 00:12:45.860 fused_ordering(995) 00:12:45.860 fused_ordering(996) 00:12:45.860 fused_ordering(997) 
00:12:45.860 fused_ordering(998) 00:12:45.860 fused_ordering(999) 00:12:45.860 fused_ordering(1000) 00:12:45.860 fused_ordering(1001) 00:12:45.860 fused_ordering(1002) 00:12:45.860 fused_ordering(1003) 00:12:45.860 fused_ordering(1004) 00:12:45.860 fused_ordering(1005) 00:12:45.860 fused_ordering(1006) 00:12:45.860 fused_ordering(1007) 00:12:45.860 fused_ordering(1008) 00:12:45.860 fused_ordering(1009) 00:12:45.860 fused_ordering(1010) 00:12:45.860 fused_ordering(1011) 00:12:45.860 fused_ordering(1012) 00:12:45.860 fused_ordering(1013) 00:12:45.860 fused_ordering(1014) 00:12:45.860 fused_ordering(1015) 00:12:45.860 fused_ordering(1016) 00:12:45.860 fused_ordering(1017) 00:12:45.860 fused_ordering(1018) 00:12:45.860 fused_ordering(1019) 00:12:45.860 fused_ordering(1020) 00:12:45.860 fused_ordering(1021) 00:12:45.860 fused_ordering(1022) 00:12:45.860 fused_ordering(1023) 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:45.860 rmmod nvme_tcp 00:12:45.860 rmmod nvme_fabrics 00:12:45.860 rmmod nvme_keyring 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2111411 ']' 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2111411 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2111411 ']' 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2111411 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2111411 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2111411' 00:12:45.860 killing process with pid 2111411 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2111411 00:12:45.860 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2111411 00:12:46.120 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:46.120 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:12:46.121 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:46.121 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:46.121 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:46.121 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:46.121 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:46.121 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:46.121 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:46.121 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.121 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.121 15:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.029 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:48.029 00:12:48.029 real 0m10.780s 00:12:48.029 user 0m5.124s 00:12:48.029 sys 0m5.845s 00:12:48.029 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.029 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:48.029 ************************************ 00:12:48.029 END TEST nvmf_fused_ordering 00:12:48.029 ************************************ 00:12:48.029 15:21:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:48.029 15:21:51 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:48.029 15:21:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.029 15:21:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:48.029 ************************************ 00:12:48.029 START TEST nvmf_ns_masking 00:12:48.029 ************************************ 00:12:48.029 15:21:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:48.289 * Looking for test storage... 00:12:48.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:48.290 15:21:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:48.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.290 --rc genhtml_branch_coverage=1 00:12:48.290 --rc genhtml_function_coverage=1 00:12:48.290 --rc genhtml_legend=1 00:12:48.290 --rc geninfo_all_blocks=1 00:12:48.290 --rc geninfo_unexecuted_blocks=1 00:12:48.290 00:12:48.290 ' 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:48.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.290 --rc genhtml_branch_coverage=1 00:12:48.290 --rc genhtml_function_coverage=1 00:12:48.290 --rc genhtml_legend=1 00:12:48.290 --rc geninfo_all_blocks=1 00:12:48.290 --rc geninfo_unexecuted_blocks=1 00:12:48.290 00:12:48.290 ' 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:48.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.290 --rc genhtml_branch_coverage=1 00:12:48.290 --rc genhtml_function_coverage=1 00:12:48.290 --rc genhtml_legend=1 00:12:48.290 --rc geninfo_all_blocks=1 00:12:48.290 --rc geninfo_unexecuted_blocks=1 00:12:48.290 00:12:48.290 ' 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:48.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.290 --rc genhtml_branch_coverage=1 00:12:48.290 --rc 
genhtml_function_coverage=1 00:12:48.290 --rc genhtml_legend=1 00:12:48.290 --rc geninfo_all_blocks=1 00:12:48.290 --rc geninfo_unexecuted_blocks=1 00:12:48.290 00:12:48.290 ' 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:48.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=038a9ef3-d887-4a19-84be-da78a8dd690c 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=6ce33d95-6611-4eb9-9560-bcf05908f7de 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=b868054f-eb5c-40c2-8a10-3f2a85e0c37f 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:48.290 15:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:54.861 15:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.861 15:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:54.861 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:54.861 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.861 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:12:54.862 Found net devices under 0000:86:00.0: cvl_0_0 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:54.862 Found net devices under 0000:86:00.1: cvl_0_1 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:54.862 15:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:54.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:54.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:12:54.862 00:12:54.862 --- 10.0.0.2 ping statistics --- 00:12:54.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.862 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:54.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:12:54.862 00:12:54.862 --- 10.0.0.1 ping statistics --- 00:12:54.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.862 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2115282 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2115282 
00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2115282 ']' 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:54.862 [2024-11-20 15:21:58.145637] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:12:54.862 [2024-11-20 15:21:58.145691] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.862 [2024-11-20 15:21:58.226689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.862 [2024-11-20 15:21:58.268267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.862 [2024-11-20 15:21:58.268305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:54.862 [2024-11-20 15:21:58.268312] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.862 [2024-11-20 15:21:58.268318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.862 [2024-11-20 15:21:58.268323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.862 [2024-11-20 15:21:58.268879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:54.862 [2024-11-20 15:21:58.565809] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:54.862 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:12:55.121 Malloc1 00:12:55.121 15:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:55.121 Malloc2 00:12:55.121 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:55.380 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:55.637 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.896 [2024-11-20 15:21:59.606924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.896 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:55.896 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b868054f-eb5c-40c2-8a10-3f2a85e0c37f -a 10.0.0.2 -s 4420 -i 4 00:12:55.896 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.896 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:55.896 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.896 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:55.896 15:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:58.428 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:58.428 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:58.428 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.428 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:58.428 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.428 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:58.428 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:58.428 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:58.428 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:58.428 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:58.428 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:58.428 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.428 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:58.428 [ 0]:0x1 00:12:58.428 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:58.428 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.428 
15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=751262c25aeb41b0bf571adc669952b7 00:12:58.428 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 751262c25aeb41b0bf571adc669952b7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.428 15:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:58.428 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:58.428 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.428 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:58.428 [ 0]:0x1 00:12:58.428 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:58.428 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.428 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=751262c25aeb41b0bf571adc669952b7 00:12:58.428 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 751262c25aeb41b0bf571adc669952b7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.428 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:58.428 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.428 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:58.428 [ 1]:0x2 00:12:58.428 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:12:58.428 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.428 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=749cfd73dcc749b6a5701aef223d6c2c 00:12:58.428 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 749cfd73dcc749b6a5701aef223d6c2c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.428 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:58.428 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.428 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.687 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:58.946 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:58.946 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b868054f-eb5c-40c2-8a10-3f2a85e0c37f -a 10.0.0.2 -s 4420 -i 4 00:12:59.205 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:59.205 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:59.205 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.205 15:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:59.205 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:59.205 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:01.107 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.107 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:01.107 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.107 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:01.107 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:01.107 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:01.107 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:01.107 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:13:01.365 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.365 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:01.365 [ 0]:0x2 00:13:01.365 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:01.365 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.365 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=749cfd73dcc749b6a5701aef223d6c2c 00:13:01.366 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 749cfd73dcc749b6a5701aef223d6c2c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.366 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:01.366 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:01.366 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.366 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:01.366 [ 0]:0x1 00:13:01.366 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.366 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:01.624 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=751262c25aeb41b0bf571adc669952b7 00:13:01.624 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 751262c25aeb41b0bf571adc669952b7 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.624 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:01.624 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:01.624 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.624 [ 1]:0x2 00:13:01.624 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:01.624 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.624 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=749cfd73dcc749b6a5701aef223d6c2c 00:13:01.624 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 749cfd73dcc749b6a5701aef223d6c2c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.624 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:01.883 [ 0]:0x2 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=749cfd73dcc749b6a5701aef223d6c2c 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 749cfd73dcc749b6a5701aef223d6c2c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.883 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:01.884 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.884 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:02.142 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:02.142 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b868054f-eb5c-40c2-8a10-3f2a85e0c37f -a 10.0.0.2 -s 4420 -i 4 00:13:02.400 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:02.401 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:02.401 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.401 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:02.401 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:02.401 15:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:04.304 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:04.304 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:04.304 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.304 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:04.304 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.304 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:04.304 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:04.304 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:04.563 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:04.563 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:04.563 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:04.563 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.563 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:04.563 [ 0]:0x1 00:13:04.563 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:04.563 15:22:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.563 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=751262c25aeb41b0bf571adc669952b7 00:13:04.563 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 751262c25aeb41b0bf571adc669952b7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.563 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:04.563 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.563 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:04.563 [ 1]:0x2 00:13:04.563 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:04.563 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.563 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=749cfd73dcc749b6a5701aef223d6c2c 00:13:04.563 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 749cfd73dcc749b6a5701aef223d6c2c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.822 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:04.822 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:04.822 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:04.822 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:04.822 
15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:04.822 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.822 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:04.822 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.822 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:04.822 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.822 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:04.822 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:04.822 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.822 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:04.822 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.822 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:04.822 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:04.822 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:04.823 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:04.823 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:13:04.823 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.823 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:05.082 [ 0]:0x2 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=749cfd73dcc749b6a5701aef223d6c2c 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 749cfd73dcc749b6a5701aef223d6c2c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:05.082 15:22:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:05.082 [2024-11-20 15:22:08.945463] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:05.082 request: 00:13:05.082 { 00:13:05.082 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:05.082 "nsid": 2, 00:13:05.082 "host": "nqn.2016-06.io.spdk:host1", 00:13:05.082 "method": "nvmf_ns_remove_host", 00:13:05.082 "req_id": 1 00:13:05.082 } 00:13:05.082 Got JSON-RPC error response 00:13:05.082 response: 00:13:05.082 { 00:13:05.082 "code": -32602, 00:13:05.082 "message": "Invalid parameters" 00:13:05.082 } 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:05.082 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:05.341 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:05.341 15:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:05.341 15:22:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:05.341 [ 0]:0x2 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=749cfd73dcc749b6a5701aef223d6c2c 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 749cfd73dcc749b6a5701aef223d6c2c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2117206 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2117206 /var/tmp/host.sock 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2117206 ']' 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:05.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:05.341 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:05.341 [2024-11-20 15:22:09.159396] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:13:05.341 [2024-11-20 15:22:09.159442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117206 ] 00:13:05.341 [2024-11-20 15:22:09.234118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.600 [2024-11-20 15:22:09.276068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.600 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:05.600 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:05.600 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.858 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:06.116 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 038a9ef3-d887-4a19-84be-da78a8dd690c 00:13:06.116 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:06.116 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 038A9EF3D8874A1984BEDA78A8DD690C -i 00:13:06.374 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 6ce33d95-6611-4eb9-9560-bcf05908f7de 00:13:06.374 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:06.374 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 6CE33D9566114EB99560BCF05908F7DE -i 00:13:06.632 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:06.632 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:06.890 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:06.890 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:07.147 nvme0n1 00:13:07.147 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:07.147 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:07.712 nvme1n2 00:13:07.712 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:07.712 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:07.712 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:07.712 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:07.712 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:07.712 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:07.712 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:07.712 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:07.712 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:07.972 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 038a9ef3-d887-4a19-84be-da78a8dd690c == \0\3\8\a\9\e\f\3\-\d\8\8\7\-\4\a\1\9\-\8\4\b\e\-\d\a\7\8\a\8\d\d\6\9\0\c ]] 00:13:07.972 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:07.972 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:07.972 15:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:08.231 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 6ce33d95-6611-4eb9-9560-bcf05908f7de == \6\c\e\3\3\d\9\5\-\6\6\1\1\-\4\e\b\9\-\9\5\6\0\-\b\c\f\0\5\9\0\8\f\7\d\e ]] 00:13:08.231 15:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.491 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:08.491 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 038a9ef3-d887-4a19-84be-da78a8dd690c 00:13:08.491 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:08.491 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 038A9EF3D8874A1984BEDA78A8DD690C 00:13:08.491 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:08.491 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 038A9EF3D8874A1984BEDA78A8DD690C 00:13:08.491 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:08.491 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:08.491 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:08.748 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:08.748 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:08.748 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:08.748 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:08.748 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:08.748 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 038A9EF3D8874A1984BEDA78A8DD690C 00:13:08.748 [2024-11-20 15:22:12.571526] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:08.748 [2024-11-20 15:22:12.571557] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:08.748 [2024-11-20 15:22:12.571565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.748 request: 00:13:08.748 { 00:13:08.748 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:08.748 "namespace": { 00:13:08.748 "bdev_name": "invalid", 00:13:08.748 "nsid": 1, 00:13:08.748 "nguid": "038A9EF3D8874A1984BEDA78A8DD690C", 00:13:08.749 "no_auto_visible": false 00:13:08.749 }, 00:13:08.749 "method": "nvmf_subsystem_add_ns", 00:13:08.749 "req_id": 1 00:13:08.749 } 00:13:08.749 Got JSON-RPC error response 00:13:08.749 response: 00:13:08.749 { 00:13:08.749 "code": -32602, 00:13:08.749 "message": "Invalid parameters" 00:13:08.749 } 00:13:08.749 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:08.749 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:08.749 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:08.749 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:08.749 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 038a9ef3-d887-4a19-84be-da78a8dd690c 00:13:08.749 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:08.749 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 038A9EF3D8874A1984BEDA78A8DD690C -i 00:13:09.006 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:10.910 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:10.910 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:10.910 15:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:11.169 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:11.169 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2117206 00:13:11.169 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2117206 ']' 00:13:11.169 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2117206 00:13:11.169 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:11.169 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.169 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2117206 00:13:11.427 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:11.427 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:11.427 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2117206' 00:13:11.427 killing process with pid 2117206 00:13:11.427 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2117206 00:13:11.427 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2117206 00:13:11.686 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.686 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:11.686 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:11.686 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:11.686 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:11.686 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:11.686 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:11.686 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:11.686 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:11.686 rmmod nvme_tcp 00:13:11.945 rmmod 
nvme_fabrics 00:13:11.945 rmmod nvme_keyring 00:13:11.945 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:11.945 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:11.945 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:11.945 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2115282 ']' 00:13:11.945 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2115282 00:13:11.945 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2115282 ']' 00:13:11.945 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2115282 00:13:11.945 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:11.945 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.945 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2115282 00:13:11.945 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.945 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.945 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2115282' 00:13:11.945 killing process with pid 2115282 00:13:11.945 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2115282 00:13:11.945 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2115282 00:13:12.206 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:12.206 
15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:12.206 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:12.206 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:12.206 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:12.206 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:12.206 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:12.206 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:12.206 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:12.206 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.206 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:12.206 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.113 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:14.113 00:13:14.113 real 0m26.044s 00:13:14.113 user 0m31.156s 00:13:14.113 sys 0m7.107s 00:13:14.113 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.113 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:14.113 ************************************ 00:13:14.113 END TEST nvmf_ns_masking 00:13:14.113 ************************************ 00:13:14.113 15:22:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:14.113 15:22:17 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:14.113 15:22:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:14.113 15:22:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.113 15:22:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:14.373 ************************************ 00:13:14.373 START TEST nvmf_nvme_cli 00:13:14.373 ************************************ 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:14.373 * Looking for test storage... 00:13:14.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.373 15:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:14.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.373 --rc genhtml_branch_coverage=1 00:13:14.373 --rc genhtml_function_coverage=1 00:13:14.373 --rc genhtml_legend=1 00:13:14.373 --rc geninfo_all_blocks=1 00:13:14.373 --rc geninfo_unexecuted_blocks=1 00:13:14.373 
00:13:14.373 ' 00:13:14.373 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:14.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.373 --rc genhtml_branch_coverage=1 00:13:14.374 --rc genhtml_function_coverage=1 00:13:14.374 --rc genhtml_legend=1 00:13:14.374 --rc geninfo_all_blocks=1 00:13:14.374 --rc geninfo_unexecuted_blocks=1 00:13:14.374 00:13:14.374 ' 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:14.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.374 --rc genhtml_branch_coverage=1 00:13:14.374 --rc genhtml_function_coverage=1 00:13:14.374 --rc genhtml_legend=1 00:13:14.374 --rc geninfo_all_blocks=1 00:13:14.374 --rc geninfo_unexecuted_blocks=1 00:13:14.374 00:13:14.374 ' 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:14.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.374 --rc genhtml_branch_coverage=1 00:13:14.374 --rc genhtml_function_coverage=1 00:13:14.374 --rc genhtml_legend=1 00:13:14.374 --rc geninfo_all_blocks=1 00:13:14.374 --rc geninfo_unexecuted_blocks=1 00:13:14.374 00:13:14.374 ' 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.374 15:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:14.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:14.374 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:20.950 15:22:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.950 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:20.951 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:20.951 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.951 15:22:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:20.951 Found net devices under 0000:86:00.0: cvl_0_0 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:20.951 Found net devices under 0000:86:00.1: cvl_0_1 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.951 15:22:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.951 15:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:20.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:13:20.951 00:13:20.951 --- 10.0.0.2 ping statistics --- 00:13:20.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.951 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:20.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:20.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:13:20.951 00:13:20.951 --- 10.0.0.1 ping statistics --- 00:13:20.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.951 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:20.951 15:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2121920 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2121920 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2121920 ']' 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.951 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.951 [2024-11-20 15:22:24.253557] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:13:20.952 [2024-11-20 15:22:24.253600] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.952 [2024-11-20 15:22:24.332977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:20.952 [2024-11-20 15:22:24.376856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.952 [2024-11-20 15:22:24.376895] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.952 [2024-11-20 15:22:24.376902] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.952 [2024-11-20 15:22:24.376908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.952 [2024-11-20 15:22:24.376913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:20.952 [2024-11-20 15:22:24.378511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.952 [2024-11-20 15:22:24.378628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.952 [2024-11-20 15:22:24.378738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.952 [2024-11-20 15:22:24.378739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.952 [2024-11-20 15:22:24.520292] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.952 Malloc0 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.952 Malloc1 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.952 [2024-11-20 15:22:24.616082] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:20.952 00:13:20.952 Discovery Log Number of Records 2, Generation counter 2 00:13:20.952 =====Discovery Log Entry 0====== 00:13:20.952 trtype: tcp 00:13:20.952 adrfam: ipv4 00:13:20.952 subtype: current discovery subsystem 00:13:20.952 treq: not required 00:13:20.952 portid: 0 00:13:20.952 trsvcid: 4420 
00:13:20.952 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:20.952 traddr: 10.0.0.2 00:13:20.952 eflags: explicit discovery connections, duplicate discovery information 00:13:20.952 sectype: none 00:13:20.952 =====Discovery Log Entry 1====== 00:13:20.952 trtype: tcp 00:13:20.952 adrfam: ipv4 00:13:20.952 subtype: nvme subsystem 00:13:20.952 treq: not required 00:13:20.952 portid: 0 00:13:20.952 trsvcid: 4420 00:13:20.952 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:20.952 traddr: 10.0.0.2 00:13:20.952 eflags: none 00:13:20.952 sectype: none 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:20.952 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:22.328 15:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:22.328 15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:22.328 15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:22.328 15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:22.328 15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:22.328 15:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:24.228 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:24.228 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:24.228 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:24.228 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:24.228 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:24.228 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:24.228 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:24.228 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:24.228 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:24.228 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:24.486 
15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:24.486 /dev/nvme0n2 ]] 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:24.486 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:24.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.745 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:24.745 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:24.745 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:24.745 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.745 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:24.745 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.745 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:13:24.745 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:24.745 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:24.745 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.745 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:24.745 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.745 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:24.745 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:24.745 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:24.745 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:24.745 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:24.745 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:24.745 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:24.745 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:24.745 rmmod nvme_tcp 00:13:24.745 rmmod nvme_fabrics 00:13:25.004 rmmod nvme_keyring 00:13:25.004 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:25.004 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:25.004 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:25.004 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2121920 ']' 
00:13:25.004 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2121920 00:13:25.004 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2121920 ']' 00:13:25.004 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2121920 00:13:25.004 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:25.004 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:25.004 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2121920 00:13:25.004 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:25.004 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:25.004 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2121920' 00:13:25.004 killing process with pid 2121920 00:13:25.004 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2121920 00:13:25.004 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2121920 00:13:25.262 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:25.262 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:25.262 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:25.262 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:25.262 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:25.262 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:13:25.262 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:25.262 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:25.262 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:25.262 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.262 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:25.262 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.163 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:27.163 00:13:27.163 real 0m12.978s 00:13:27.163 user 0m19.856s 00:13:27.163 sys 0m5.142s 00:13:27.163 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.163 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:27.163 ************************************ 00:13:27.163 END TEST nvmf_nvme_cli 00:13:27.163 ************************************ 00:13:27.163 15:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:27.163 15:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:27.163 15:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:27.163 15:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.163 15:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:27.422 ************************************ 00:13:27.422 
START TEST nvmf_vfio_user 00:13:27.422 ************************************ 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:27.422 * Looking for test storage... 00:13:27.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:27.422 15:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:27.422 15:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:27.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.422 --rc genhtml_branch_coverage=1 00:13:27.422 --rc genhtml_function_coverage=1 00:13:27.422 --rc genhtml_legend=1 00:13:27.422 --rc geninfo_all_blocks=1 00:13:27.422 --rc geninfo_unexecuted_blocks=1 00:13:27.422 00:13:27.422 ' 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:27.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.422 --rc genhtml_branch_coverage=1 00:13:27.422 --rc genhtml_function_coverage=1 00:13:27.422 --rc genhtml_legend=1 00:13:27.422 --rc geninfo_all_blocks=1 00:13:27.422 --rc geninfo_unexecuted_blocks=1 00:13:27.422 00:13:27.422 ' 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:27.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.422 --rc genhtml_branch_coverage=1 00:13:27.422 --rc genhtml_function_coverage=1 00:13:27.422 --rc genhtml_legend=1 00:13:27.422 --rc geninfo_all_blocks=1 00:13:27.422 --rc geninfo_unexecuted_blocks=1 00:13:27.422 00:13:27.422 ' 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:27.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.422 --rc genhtml_branch_coverage=1 00:13:27.422 --rc genhtml_function_coverage=1 00:13:27.422 --rc genhtml_legend=1 00:13:27.422 --rc geninfo_all_blocks=1 00:13:27.422 --rc geninfo_unexecuted_blocks=1 00:13:27.422 00:13:27.422 ' 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.422 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.423 
15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:27.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:27.423 15:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2123208 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2123208' 00:13:27.423 Process pid: 2123208 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2123208 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 2123208 ']' 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.423 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:27.683 [2024-11-20 15:22:31.351836] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:13:27.683 [2024-11-20 15:22:31.351884] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.683 [2024-11-20 15:22:31.425895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:27.683 [2024-11-20 15:22:31.468912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.683 [2024-11-20 15:22:31.468954] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.683 [2024-11-20 15:22:31.468961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.683 [2024-11-20 15:22:31.468968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.683 [2024-11-20 15:22:31.468973] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:27.683 [2024-11-20 15:22:31.470620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.683 [2024-11-20 15:22:31.470658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.683 [2024-11-20 15:22:31.470772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.683 [2024-11-20 15:22:31.470773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.683 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.683 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:27.683 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:29.060 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:29.060 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:29.060 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:29.060 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:29.060 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:29.060 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:29.319 Malloc1 00:13:29.319 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:29.319 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:29.886 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:29.886 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:29.886 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:29.886 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:30.145 Malloc2 00:13:30.145 15:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:30.404 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:30.404 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:30.662 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:30.662 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:30.662 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:13:30.662 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:30.662 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:30.662 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:30.662 [2024-11-20 15:22:34.528023] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:13:30.662 [2024-11-20 15:22:34.528071] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2123746 ] 00:13:30.923 [2024-11-20 15:22:34.570098] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:30.923 [2024-11-20 15:22:34.579302] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:30.923 [2024-11-20 15:22:34.579325] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f689b0fa000 00:13:30.923 [2024-11-20 15:22:34.580303] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:30.923 [2024-11-20 15:22:34.581305] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:30.923 [2024-11-20 15:22:34.582311] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:30.923 [2024-11-20 15:22:34.583316] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:30.923 [2024-11-20 15:22:34.584321] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:30.923 [2024-11-20 15:22:34.585324] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:30.923 [2024-11-20 15:22:34.586327] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:30.923 [2024-11-20 15:22:34.587335] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:30.923 [2024-11-20 15:22:34.588337] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:30.923 [2024-11-20 15:22:34.588347] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f689b0ef000 00:13:30.923 [2024-11-20 15:22:34.589292] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:30.923 [2024-11-20 15:22:34.598907] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:30.923 [2024-11-20 15:22:34.598934] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:30.923 [2024-11-20 15:22:34.604422] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:13:30.923 [2024-11-20 15:22:34.604458] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:30.923 [2024-11-20 15:22:34.604524] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:30.923 [2024-11-20 15:22:34.604539] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:30.923 [2024-11-20 15:22:34.604544] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:30.923 [2024-11-20 15:22:34.605425] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:30.923 [2024-11-20 15:22:34.605434] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:30.923 [2024-11-20 15:22:34.605440] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:30.923 [2024-11-20 15:22:34.606430] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:30.923 [2024-11-20 15:22:34.606437] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:30.923 [2024-11-20 15:22:34.606447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:30.923 [2024-11-20 15:22:34.607431] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:30.923 [2024-11-20 15:22:34.607440] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:30.923 [2024-11-20 15:22:34.608437] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:30.923 [2024-11-20 15:22:34.608445] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:30.924 [2024-11-20 15:22:34.608449] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:30.924 [2024-11-20 15:22:34.608455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:30.924 [2024-11-20 15:22:34.608563] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:30.924 [2024-11-20 15:22:34.608567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:30.924 [2024-11-20 15:22:34.608572] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:30.924 [2024-11-20 15:22:34.609448] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:30.924 [2024-11-20 15:22:34.610450] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:30.924 [2024-11-20 15:22:34.611462] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:13:30.924 [2024-11-20 15:22:34.612464] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:30.924 [2024-11-20 15:22:34.612528] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:30.924 [2024-11-20 15:22:34.613473] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:30.924 [2024-11-20 15:22:34.613481] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:30.924 [2024-11-20 15:22:34.613485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:30.924 [2024-11-20 15:22:34.613502] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:30.924 [2024-11-20 15:22:34.613512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:30.924 [2024-11-20 15:22:34.613525] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:30.924 [2024-11-20 15:22:34.613529] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:30.924 [2024-11-20 15:22:34.613533] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:30.924 [2024-11-20 15:22:34.613544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:30.924 [2024-11-20 15:22:34.613583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:30.924 [2024-11-20 15:22:34.613596] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:30.924 [2024-11-20 15:22:34.613601] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:30.924 [2024-11-20 15:22:34.613604] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:30.924 [2024-11-20 15:22:34.613609] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:30.924 [2024-11-20 15:22:34.613615] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:30.924 [2024-11-20 15:22:34.613619] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:30.924 [2024-11-20 15:22:34.613624] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:30.924 [2024-11-20 15:22:34.613631] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:30.924 [2024-11-20 15:22:34.613641] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:30.924 [2024-11-20 15:22:34.613653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:30.924 [2024-11-20 15:22:34.613662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:30.924 [2024-11-20 
15:22:34.613670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:30.924 [2024-11-20 15:22:34.613677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:30.924 [2024-11-20 15:22:34.613684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:30.924 [2024-11-20 15:22:34.613689] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:30.924 [2024-11-20 15:22:34.613694] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:30.924 [2024-11-20 15:22:34.613702] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:30.924 [2024-11-20 15:22:34.613714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:30.924 [2024-11-20 15:22:34.613721] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:30.924 [2024-11-20 15:22:34.613726] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:30.924 [2024-11-20 15:22:34.613731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:30.924 [2024-11-20 15:22:34.613736] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:13:30.924 [2024-11-20 15:22:34.613744] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:30.924 [2024-11-20 15:22:34.613757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:30.924 [2024-11-20 15:22:34.613809] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:30.924 [2024-11-20 15:22:34.613816] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:30.924 [2024-11-20 15:22:34.613823] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:30.924 [2024-11-20 15:22:34.613827] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:30.924 [2024-11-20 15:22:34.613830] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:30.924 [2024-11-20 15:22:34.613836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:30.924 [2024-11-20 15:22:34.613850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:30.924 [2024-11-20 15:22:34.613858] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:30.924 [2024-11-20 15:22:34.613866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:30.924 [2024-11-20 15:22:34.613872] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:30.924 [2024-11-20 15:22:34.613878] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:30.924 [2024-11-20 15:22:34.613882] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:30.924 [2024-11-20 15:22:34.613885] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:30.924 [2024-11-20 15:22:34.613891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:30.924 [2024-11-20 15:22:34.613907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:30.924 [2024-11-20 15:22:34.613918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:30.924 [2024-11-20 15:22:34.613925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:30.924 [2024-11-20 15:22:34.613931] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:30.924 [2024-11-20 15:22:34.613935] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:30.924 [2024-11-20 15:22:34.613938] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:30.925 [2024-11-20 15:22:34.613943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:30.925 [2024-11-20 15:22:34.613957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:30.925 [2024-11-20 15:22:34.613964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:30.925 [2024-11-20 15:22:34.613970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:30.925 [2024-11-20 15:22:34.613977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:30.925 [2024-11-20 15:22:34.613982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:30.925 [2024-11-20 15:22:34.613989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:30.925 [2024-11-20 15:22:34.613994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:30.925 [2024-11-20 15:22:34.613999] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:30.925 [2024-11-20 15:22:34.614003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:30.925 [2024-11-20 15:22:34.614007] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:30.925 [2024-11-20 15:22:34.614022] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:30.925 [2024-11-20 15:22:34.614031] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:30.925 [2024-11-20 15:22:34.614041] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:30.925 [2024-11-20 15:22:34.614052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:30.925 [2024-11-20 15:22:34.614061] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:30.925 [2024-11-20 15:22:34.614072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:30.925 [2024-11-20 15:22:34.614081] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:30.925 [2024-11-20 15:22:34.614092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:30.925 [2024-11-20 15:22:34.614103] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:30.925 [2024-11-20 15:22:34.614107] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:30.925 [2024-11-20 15:22:34.614110] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:30.925 [2024-11-20 15:22:34.614113] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:30.925 [2024-11-20 15:22:34.614116] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:30.925 [2024-11-20 15:22:34.614122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:13:30.925 [2024-11-20 15:22:34.614129] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:30.925 [2024-11-20 15:22:34.614132] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:30.925 [2024-11-20 15:22:34.614135] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:30.925 [2024-11-20 15:22:34.614141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:30.925 [2024-11-20 15:22:34.614147] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:30.925 [2024-11-20 15:22:34.614151] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:30.925 [2024-11-20 15:22:34.614154] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:30.925 [2024-11-20 15:22:34.614160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:30.925 [2024-11-20 15:22:34.614166] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:30.925 [2024-11-20 15:22:34.614172] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:30.925 [2024-11-20 15:22:34.614175] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:30.925 [2024-11-20 15:22:34.614180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:30.925 [2024-11-20 15:22:34.614186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:13:30.925 [2024-11-20 15:22:34.614197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:30.925 [2024-11-20 15:22:34.614208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:30.925 [2024-11-20 15:22:34.614214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:30.925 ===================================================== 00:13:30.925 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:30.925 ===================================================== 00:13:30.925 Controller Capabilities/Features 00:13:30.925 ================================ 00:13:30.925 Vendor ID: 4e58 00:13:30.925 Subsystem Vendor ID: 4e58 00:13:30.925 Serial Number: SPDK1 00:13:30.925 Model Number: SPDK bdev Controller 00:13:30.925 Firmware Version: 25.01 00:13:30.925 Recommended Arb Burst: 6 00:13:30.925 IEEE OUI Identifier: 8d 6b 50 00:13:30.925 Multi-path I/O 00:13:30.925 May have multiple subsystem ports: Yes 00:13:30.925 May have multiple controllers: Yes 00:13:30.925 Associated with SR-IOV VF: No 00:13:30.925 Max Data Transfer Size: 131072 00:13:30.925 Max Number of Namespaces: 32 00:13:30.925 Max Number of I/O Queues: 127 00:13:30.925 NVMe Specification Version (VS): 1.3 00:13:30.925 NVMe Specification Version (Identify): 1.3 00:13:30.925 Maximum Queue Entries: 256 00:13:30.925 Contiguous Queues Required: Yes 00:13:30.925 Arbitration Mechanisms Supported 00:13:30.925 Weighted Round Robin: Not Supported 00:13:30.925 Vendor Specific: Not Supported 00:13:30.925 Reset Timeout: 15000 ms 00:13:30.925 Doorbell Stride: 4 bytes 00:13:30.925 NVM Subsystem Reset: Not Supported 00:13:30.925 Command Sets Supported 00:13:30.925 NVM Command Set: Supported 00:13:30.925 Boot Partition: Not Supported 00:13:30.925 Memory 
Page Size Minimum: 4096 bytes 00:13:30.925 Memory Page Size Maximum: 4096 bytes 00:13:30.925 Persistent Memory Region: Not Supported 00:13:30.925 Optional Asynchronous Events Supported 00:13:30.925 Namespace Attribute Notices: Supported 00:13:30.925 Firmware Activation Notices: Not Supported 00:13:30.925 ANA Change Notices: Not Supported 00:13:30.925 PLE Aggregate Log Change Notices: Not Supported 00:13:30.925 LBA Status Info Alert Notices: Not Supported 00:13:30.925 EGE Aggregate Log Change Notices: Not Supported 00:13:30.925 Normal NVM Subsystem Shutdown event: Not Supported 00:13:30.925 Zone Descriptor Change Notices: Not Supported 00:13:30.925 Discovery Log Change Notices: Not Supported 00:13:30.925 Controller Attributes 00:13:30.925 128-bit Host Identifier: Supported 00:13:30.925 Non-Operational Permissive Mode: Not Supported 00:13:30.925 NVM Sets: Not Supported 00:13:30.925 Read Recovery Levels: Not Supported 00:13:30.925 Endurance Groups: Not Supported 00:13:30.925 Predictable Latency Mode: Not Supported 00:13:30.925 Traffic Based Keep ALive: Not Supported 00:13:30.925 Namespace Granularity: Not Supported 00:13:30.925 SQ Associations: Not Supported 00:13:30.925 UUID List: Not Supported 00:13:30.925 Multi-Domain Subsystem: Not Supported 00:13:30.925 Fixed Capacity Management: Not Supported 00:13:30.925 Variable Capacity Management: Not Supported 00:13:30.925 Delete Endurance Group: Not Supported 00:13:30.925 Delete NVM Set: Not Supported 00:13:30.925 Extended LBA Formats Supported: Not Supported 00:13:30.926 Flexible Data Placement Supported: Not Supported 00:13:30.926 00:13:30.926 Controller Memory Buffer Support 00:13:30.926 ================================ 00:13:30.926 Supported: No 00:13:30.926 00:13:30.926 Persistent Memory Region Support 00:13:30.926 ================================ 00:13:30.926 Supported: No 00:13:30.926 00:13:30.926 Admin Command Set Attributes 00:13:30.926 ============================ 00:13:30.926 Security Send/Receive: Not Supported 
00:13:30.926 Format NVM: Not Supported 00:13:30.926 Firmware Activate/Download: Not Supported 00:13:30.926 Namespace Management: Not Supported 00:13:30.926 Device Self-Test: Not Supported 00:13:30.926 Directives: Not Supported 00:13:30.926 NVMe-MI: Not Supported 00:13:30.926 Virtualization Management: Not Supported 00:13:30.926 Doorbell Buffer Config: Not Supported 00:13:30.926 Get LBA Status Capability: Not Supported 00:13:30.926 Command & Feature Lockdown Capability: Not Supported 00:13:30.926 Abort Command Limit: 4 00:13:30.926 Async Event Request Limit: 4 00:13:30.926 Number of Firmware Slots: N/A 00:13:30.926 Firmware Slot 1 Read-Only: N/A 00:13:30.926 Firmware Activation Without Reset: N/A 00:13:30.926 Multiple Update Detection Support: N/A 00:13:30.926 Firmware Update Granularity: No Information Provided 00:13:30.926 Per-Namespace SMART Log: No 00:13:30.926 Asymmetric Namespace Access Log Page: Not Supported 00:13:30.926 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:30.926 Command Effects Log Page: Supported 00:13:30.926 Get Log Page Extended Data: Supported 00:13:30.926 Telemetry Log Pages: Not Supported 00:13:30.926 Persistent Event Log Pages: Not Supported 00:13:30.926 Supported Log Pages Log Page: May Support 00:13:30.926 Commands Supported & Effects Log Page: Not Supported 00:13:30.926 Feature Identifiers & Effects Log Page:May Support 00:13:30.926 NVMe-MI Commands & Effects Log Page: May Support 00:13:30.926 Data Area 4 for Telemetry Log: Not Supported 00:13:30.926 Error Log Page Entries Supported: 128 00:13:30.926 Keep Alive: Supported 00:13:30.926 Keep Alive Granularity: 10000 ms 00:13:30.926 00:13:30.926 NVM Command Set Attributes 00:13:30.926 ========================== 00:13:30.926 Submission Queue Entry Size 00:13:30.926 Max: 64 00:13:30.926 Min: 64 00:13:30.926 Completion Queue Entry Size 00:13:30.926 Max: 16 00:13:30.926 Min: 16 00:13:30.926 Number of Namespaces: 32 00:13:30.926 Compare Command: Supported 00:13:30.926 Write Uncorrectable 
Command: Not Supported 00:13:30.926 Dataset Management Command: Supported 00:13:30.926 Write Zeroes Command: Supported 00:13:30.926 Set Features Save Field: Not Supported 00:13:30.926 Reservations: Not Supported 00:13:30.926 Timestamp: Not Supported 00:13:30.926 Copy: Supported 00:13:30.926 Volatile Write Cache: Present 00:13:30.926 Atomic Write Unit (Normal): 1 00:13:30.926 Atomic Write Unit (PFail): 1 00:13:30.926 Atomic Compare & Write Unit: 1 00:13:30.926 Fused Compare & Write: Supported 00:13:30.926 Scatter-Gather List 00:13:30.926 SGL Command Set: Supported (Dword aligned) 00:13:30.926 SGL Keyed: Not Supported 00:13:30.926 SGL Bit Bucket Descriptor: Not Supported 00:13:30.926 SGL Metadata Pointer: Not Supported 00:13:30.926 Oversized SGL: Not Supported 00:13:30.926 SGL Metadata Address: Not Supported 00:13:30.926 SGL Offset: Not Supported 00:13:30.926 Transport SGL Data Block: Not Supported 00:13:30.926 Replay Protected Memory Block: Not Supported 00:13:30.926 00:13:30.926 Firmware Slot Information 00:13:30.926 ========================= 00:13:30.926 Active slot: 1 00:13:30.926 Slot 1 Firmware Revision: 25.01 00:13:30.926 00:13:30.926 00:13:30.926 Commands Supported and Effects 00:13:30.926 ============================== 00:13:30.926 Admin Commands 00:13:30.926 -------------- 00:13:30.926 Get Log Page (02h): Supported 00:13:30.926 Identify (06h): Supported 00:13:30.926 Abort (08h): Supported 00:13:30.926 Set Features (09h): Supported 00:13:30.926 Get Features (0Ah): Supported 00:13:30.926 Asynchronous Event Request (0Ch): Supported 00:13:30.926 Keep Alive (18h): Supported 00:13:30.926 I/O Commands 00:13:30.926 ------------ 00:13:30.926 Flush (00h): Supported LBA-Change 00:13:30.926 Write (01h): Supported LBA-Change 00:13:30.926 Read (02h): Supported 00:13:30.926 Compare (05h): Supported 00:13:30.926 Write Zeroes (08h): Supported LBA-Change 00:13:30.926 Dataset Management (09h): Supported LBA-Change 00:13:30.926 Copy (19h): Supported LBA-Change 00:13:30.926 
00:13:30.926 Error Log 00:13:30.926 ========= 00:13:30.926 00:13:30.926 Arbitration 00:13:30.926 =========== 00:13:30.926 Arbitration Burst: 1 00:13:30.926 00:13:30.926 Power Management 00:13:30.926 ================ 00:13:30.926 Number of Power States: 1 00:13:30.926 Current Power State: Power State #0 00:13:30.926 Power State #0: 00:13:30.926 Max Power: 0.00 W 00:13:30.926 Non-Operational State: Operational 00:13:30.926 Entry Latency: Not Reported 00:13:30.926 Exit Latency: Not Reported 00:13:30.926 Relative Read Throughput: 0 00:13:30.926 Relative Read Latency: 0 00:13:30.926 Relative Write Throughput: 0 00:13:30.926 Relative Write Latency: 0 00:13:30.926 Idle Power: Not Reported 00:13:30.926 Active Power: Not Reported 00:13:30.926 Non-Operational Permissive Mode: Not Supported 00:13:30.926 00:13:30.926 Health Information 00:13:30.926 ================== 00:13:30.926 Critical Warnings: 00:13:30.926 Available Spare Space: OK 00:13:30.926 Temperature: OK 00:13:30.926 Device Reliability: OK 00:13:30.926 Read Only: No 00:13:30.926 Volatile Memory Backup: OK 00:13:30.926 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:30.926 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:30.926 Available Spare: 0% 00:13:30.926 Available Sp[2024-11-20 15:22:34.614302] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:30.926 [2024-11-20 15:22:34.614309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:30.926 [2024-11-20 15:22:34.614332] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:30.926 [2024-11-20 15:22:34.614341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:30.926 [2024-11-20 15:22:34.614347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:30.926 [2024-11-20 15:22:34.614352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:30.926 [2024-11-20 15:22:34.614357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:30.926 [2024-11-20 15:22:34.617957] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:30.927 [2024-11-20 15:22:34.617968] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:30.927 [2024-11-20 15:22:34.618514] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:30.927 [2024-11-20 15:22:34.618563] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:30.927 [2024-11-20 15:22:34.618569] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:30.927 [2024-11-20 15:22:34.619521] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:30.927 [2024-11-20 15:22:34.619532] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:30.927 [2024-11-20 15:22:34.619580] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:30.927 [2024-11-20 15:22:34.621544] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:30.927 are Threshold: 0% 00:13:30.927 Life Percentage Used: 0% 
00:13:30.927 Data Units Read: 0 00:13:30.927 Data Units Written: 0 00:13:30.927 Host Read Commands: 0 00:13:30.927 Host Write Commands: 0 00:13:30.927 Controller Busy Time: 0 minutes 00:13:30.927 Power Cycles: 0 00:13:30.927 Power On Hours: 0 hours 00:13:30.927 Unsafe Shutdowns: 0 00:13:30.927 Unrecoverable Media Errors: 0 00:13:30.927 Lifetime Error Log Entries: 0 00:13:30.927 Warning Temperature Time: 0 minutes 00:13:30.927 Critical Temperature Time: 0 minutes 00:13:30.927 00:13:30.927 Number of Queues 00:13:30.927 ================ 00:13:30.927 Number of I/O Submission Queues: 127 00:13:30.927 Number of I/O Completion Queues: 127 00:13:30.927 00:13:30.927 Active Namespaces 00:13:30.927 ================= 00:13:30.927 Namespace ID:1 00:13:30.927 Error Recovery Timeout: Unlimited 00:13:30.927 Command Set Identifier: NVM (00h) 00:13:30.927 Deallocate: Supported 00:13:30.927 Deallocated/Unwritten Error: Not Supported 00:13:30.927 Deallocated Read Value: Unknown 00:13:30.927 Deallocate in Write Zeroes: Not Supported 00:13:30.927 Deallocated Guard Field: 0xFFFF 00:13:30.927 Flush: Supported 00:13:30.927 Reservation: Supported 00:13:30.927 Namespace Sharing Capabilities: Multiple Controllers 00:13:30.927 Size (in LBAs): 131072 (0GiB) 00:13:30.927 Capacity (in LBAs): 131072 (0GiB) 00:13:30.927 Utilization (in LBAs): 131072 (0GiB) 00:13:30.927 NGUID: 7E201E657B854D528242244AD50A1215 00:13:30.927 UUID: 7e201e65-7b85-4d52-8242-244ad50a1215 00:13:30.927 Thin Provisioning: Not Supported 00:13:30.927 Per-NS Atomic Units: Yes 00:13:30.927 Atomic Boundary Size (Normal): 0 00:13:30.927 Atomic Boundary Size (PFail): 0 00:13:30.927 Atomic Boundary Offset: 0 00:13:30.927 Maximum Single Source Range Length: 65535 00:13:30.927 Maximum Copy Length: 65535 00:13:30.927 Maximum Source Range Count: 1 00:13:30.927 NGUID/EUI64 Never Reused: No 00:13:30.927 Namespace Write Protected: No 00:13:30.927 Number of LBA Formats: 1 00:13:30.927 Current LBA Format: LBA Format #00 00:13:30.927 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:13:30.927 00:13:30.927 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:31.186 [2024-11-20 15:22:34.848774] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:36.460 Initializing NVMe Controllers 00:13:36.460 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:36.460 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:36.460 Initialization complete. Launching workers. 00:13:36.460 ======================================================== 00:13:36.460 Latency(us) 00:13:36.460 Device Information : IOPS MiB/s Average min max 00:13:36.460 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39956.07 156.08 3203.34 956.88 7600.11 00:13:36.460 ======================================================== 00:13:36.460 Total : 39956.07 156.08 3203.34 956.88 7600.11 00:13:36.460 00:13:36.460 [2024-11-20 15:22:39.866069] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:36.460 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:36.460 [2024-11-20 15:22:40.104200] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:41.733 Initializing NVMe Controllers 00:13:41.733 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:41.733 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:41.733 Initialization complete. Launching workers. 00:13:41.733 ======================================================== 00:13:41.733 Latency(us) 00:13:41.733 Device Information : IOPS MiB/s Average min max 00:13:41.733 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16049.42 62.69 7980.74 5982.14 15463.03 00:13:41.733 ======================================================== 00:13:41.733 Total : 16049.42 62.69 7980.74 5982.14 15463.03 00:13:41.733 00:13:41.733 [2024-11-20 15:22:45.146732] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:41.733 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:41.733 [2024-11-20 15:22:45.359759] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:47.194 [2024-11-20 15:22:50.428229] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:47.194 Initializing NVMe Controllers 00:13:47.194 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:47.194 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:47.194 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:47.194 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:47.194 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:47.194 Initialization complete. 
Launching workers. 00:13:47.194 Starting thread on core 2 00:13:47.194 Starting thread on core 3 00:13:47.194 Starting thread on core 1 00:13:47.194 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:47.194 [2024-11-20 15:22:50.726175] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:50.483 [2024-11-20 15:22:53.911156] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:50.483 Initializing NVMe Controllers 00:13:50.483 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:50.483 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:50.483 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:50.483 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:50.483 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:50.483 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:50.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:50.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:50.483 Initialization complete. Launching workers. 
00:13:50.483 Starting thread on core 1 with urgent priority queue 00:13:50.483 Starting thread on core 2 with urgent priority queue 00:13:50.483 Starting thread on core 3 with urgent priority queue 00:13:50.483 Starting thread on core 0 with urgent priority queue 00:13:50.483 SPDK bdev Controller (SPDK1 ) core 0: 6765.00 IO/s 14.78 secs/100000 ios 00:13:50.483 SPDK bdev Controller (SPDK1 ) core 1: 6911.33 IO/s 14.47 secs/100000 ios 00:13:50.483 SPDK bdev Controller (SPDK1 ) core 2: 7202.67 IO/s 13.88 secs/100000 ios 00:13:50.483 SPDK bdev Controller (SPDK1 ) core 3: 7676.67 IO/s 13.03 secs/100000 ios 00:13:50.483 ======================================================== 00:13:50.483 00:13:50.483 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:50.483 [2024-11-20 15:22:54.199221] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:50.483 Initializing NVMe Controllers 00:13:50.483 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:50.483 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:50.483 Namespace ID: 1 size: 0GB 00:13:50.483 Initialization complete. 00:13:50.483 INFO: using host memory buffer for IO 00:13:50.484 Hello world! 
00:13:50.484 [2024-11-20 15:22:54.235439] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:50.484 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:50.743 [2024-11-20 15:22:54.516148] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:51.681 Initializing NVMe Controllers 00:13:51.681 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:51.681 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:51.681 Initialization complete. Launching workers. 00:13:51.681 submit (in ns) avg, min, max = 4717.0, 3286.1, 3999760.9 00:13:51.681 complete (in ns) avg, min, max = 21211.8, 1779.1, 4992812.2 00:13:51.681 00:13:51.681 Submit histogram 00:13:51.681 ================ 00:13:51.681 Range in us Cumulative Count 00:13:51.681 3.283 - 3.297: 0.0365% ( 6) 00:13:51.681 3.297 - 3.311: 0.0913% ( 9) 00:13:51.681 3.311 - 3.325: 0.2191% ( 21) 00:13:51.681 3.325 - 3.339: 0.6269% ( 67) 00:13:51.681 3.339 - 3.353: 2.8912% ( 372) 00:13:51.681 3.353 - 3.367: 7.4746% ( 753) 00:13:51.681 3.367 - 3.381: 13.0379% ( 914) 00:13:51.681 3.381 - 3.395: 19.3378% ( 1035) 00:13:51.681 3.395 - 3.409: 25.9237% ( 1082) 00:13:51.681 3.409 - 3.423: 31.6879% ( 947) 00:13:51.681 3.423 - 3.437: 36.6425% ( 814) 00:13:51.681 3.437 - 3.450: 41.7067% ( 832) 00:13:51.681 3.450 - 3.464: 46.4605% ( 781) 00:13:51.681 3.464 - 3.478: 50.8795% ( 726) 00:13:51.681 3.478 - 3.492: 55.9133% ( 827) 00:13:51.681 3.492 - 3.506: 62.9801% ( 1161) 00:13:51.681 3.506 - 3.520: 68.7321% ( 945) 00:13:51.681 3.520 - 3.534: 73.0964% ( 717) 00:13:51.681 3.534 - 3.548: 78.1606% ( 832) 00:13:51.681 3.548 - 3.562: 82.3118% ( 682) 00:13:51.681 3.562 - 3.590: 86.5299% ( 693) 
00:13:51.681 3.590 - 3.617: 87.6012% ( 176) 00:13:51.681 3.617 - 3.645: 88.3864% ( 129) 00:13:51.681 3.645 - 3.673: 89.7194% ( 219) 00:13:51.681 3.673 - 3.701: 91.5515% ( 301) 00:13:51.681 3.701 - 3.729: 93.2741% ( 283) 00:13:51.681 3.729 - 3.757: 95.0453% ( 291) 00:13:51.681 3.757 - 3.784: 96.5853% ( 253) 00:13:51.681 3.784 - 3.812: 97.9487% ( 224) 00:13:51.681 3.812 - 3.840: 98.7583% ( 133) 00:13:51.681 3.840 - 3.868: 99.2087% ( 74) 00:13:51.681 3.868 - 3.896: 99.5252% ( 52) 00:13:51.681 3.896 - 3.923: 99.6591% ( 22) 00:13:51.681 3.923 - 3.951: 99.7017% ( 7) 00:13:51.681 3.951 - 3.979: 99.7200% ( 3) 00:13:51.681 5.231 - 5.259: 99.7261% ( 1) 00:13:51.681 5.287 - 5.315: 99.7322% ( 1) 00:13:51.682 5.315 - 5.343: 99.7383% ( 1) 00:13:51.682 5.370 - 5.398: 99.7444% ( 1) 00:13:51.682 5.398 - 5.426: 99.7504% ( 1) 00:13:51.682 5.454 - 5.482: 99.7565% ( 1) 00:13:51.682 5.482 - 5.510: 99.7687% ( 2) 00:13:51.682 5.537 - 5.565: 99.7809% ( 2) 00:13:51.682 5.593 - 5.621: 99.7870% ( 1) 00:13:51.682 5.649 - 5.677: 99.7930% ( 1) 00:13:51.682 5.871 - 5.899: 99.7991% ( 1) 00:13:51.682 5.899 - 5.927: 99.8052% ( 1) 00:13:51.682 5.927 - 5.955: 99.8113% ( 1) 00:13:51.682 5.955 - 5.983: 99.8235% ( 2) 00:13:51.682 5.983 - 6.010: 99.8296% ( 1) 00:13:51.682 6.344 - 6.372: 99.8357% ( 1) 00:13:51.682 6.734 - 6.762: 99.8417% ( 1) 00:13:51.682 6.790 - 6.817: 99.8478% ( 1) 00:13:51.682 6.817 - 6.845: 99.8539% ( 1) 00:13:51.682 6.845 - 6.873: 99.8661% ( 2) 00:13:51.682 6.873 - 6.901: 99.8722% ( 1) 00:13:51.682 7.040 - 7.068: 99.8783% ( 1) 00:13:51.682 7.068 - 7.096: 99.8844% ( 1) 00:13:51.682 7.096 - 7.123: 99.8904% ( 1) 00:13:51.682 7.513 - 7.569: 99.8965% ( 1) 00:13:51.682 7.736 - 7.791: 99.9026% ( 1) 00:13:51.682 7.791 - 7.847: 99.9087% ( 1) 00:13:51.682 8.014 - 8.070: 99.9209% ( 2) 00:13:51.682 8.237 - 8.292: 99.9270% ( 1) 00:13:51.682 8.348 - 8.403: 99.9330% ( 1) 00:13:51.682 9.350 - 9.405: 99.9391% ( 1) 00:13:51.682 10.407 - 10.463: 99.9452% ( 1) 00:13:51.682 10.963 - 11.019: 99.9513% ( 1) 
00:13:51.682 11.297 - 11.353: 99.9574% ( 1) 00:13:51.682 11.631 - 11.687: 99.9635% ( 1) 00:13:51.682 14.692 - 14.803: 99.9696% ( 1) 00:13:51.682 3989.148 - 4017.642: 100.0000% ( 5) 00:13:51.682 00:13:51.682 Complete histogram 00:13:51.682 ================== 00:13:51.682 Range in us Cumulative Count 00:13:51.682 1.774 - 1.781: 0.0243% ( 4) 00:13:51.682 1.781 - 1.795: 0.0609% ( 6) 00:13:51.682 1.795 - 1.809: 0.0852% ( 4) 00:13:51.682 1.809 - 1.823: 0.2739% ( 31) 00:13:51.682 1.823 - 1.837: 14.7483% ( 2378) 00:13:51.682 1.837 - [2024-11-20 15:22:55.538127] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:51.682 1.850: 42.1389% ( 4500) 00:13:51.682 1.850 - 1.864: 47.4405% ( 871) 00:13:51.682 1.864 - 1.878: 50.8248% ( 556) 00:13:51.682 1.878 - 1.892: 72.5120% ( 3563) 00:13:51.682 1.892 - 1.906: 90.0359% ( 2879) 00:13:51.682 1.906 - 1.920: 95.3679% ( 876) 00:13:51.682 1.920 - 1.934: 97.2670% ( 312) 00:13:51.682 1.934 - 1.948: 97.6140% ( 57) 00:13:51.682 1.948 - 1.962: 98.2227% ( 100) 00:13:51.682 1.962 - 1.976: 98.9226% ( 115) 00:13:51.682 1.976 - 1.990: 99.1844% ( 43) 00:13:51.682 1.990 - 2.003: 99.2635% ( 13) 00:13:51.682 2.003 - 2.017: 99.2757% ( 2) 00:13:51.682 2.017 - 2.031: 99.2878% ( 2) 00:13:51.682 2.031 - 2.045: 99.3061% ( 3) 00:13:51.682 2.226 - 2.240: 99.3122% ( 1) 00:13:51.682 2.240 - 2.254: 99.3183% ( 1) 00:13:51.682 2.282 - 2.296: 99.3244% ( 1) 00:13:51.682 3.423 - 3.437: 99.3305% ( 1) 00:13:51.682 3.757 - 3.784: 99.3426% ( 2) 00:13:51.682 3.812 - 3.840: 99.3487% ( 1) 00:13:51.682 3.840 - 3.868: 99.3609% ( 2) 00:13:51.682 3.951 - 3.979: 99.3670% ( 1) 00:13:51.682 4.424 - 4.452: 99.3731% ( 1) 00:13:51.682 4.814 - 4.842: 99.3791% ( 1) 00:13:51.682 5.120 - 5.148: 99.3852% ( 1) 00:13:51.682 5.148 - 5.176: 99.3913% ( 1) 00:13:51.682 5.176 - 5.203: 99.3974% ( 1) 00:13:51.682 5.231 - 5.259: 99.4035% ( 1) 00:13:51.682 5.287 - 5.315: 99.4096% ( 1) 00:13:51.682 5.315 - 5.343: 99.4157% ( 1) 00:13:51.682 5.454 
- 5.482: 99.4218% ( 1) 00:13:51.682 5.537 - 5.565: 99.4278% ( 1) 00:13:51.682 5.677 - 5.704: 99.4339% ( 1) 00:13:51.682 5.899 - 5.927: 99.4400% ( 1) 00:13:51.682 5.927 - 5.955: 99.4461% ( 1) 00:13:51.682 6.177 - 6.205: 99.4522% ( 1) 00:13:51.682 6.317 - 6.344: 99.4583% ( 1) 00:13:51.682 6.483 - 6.511: 99.4704% ( 2) 00:13:51.682 6.539 - 6.567: 99.4765% ( 1) 00:13:51.682 6.734 - 6.762: 99.4826% ( 1) 00:13:51.682 6.901 - 6.929: 99.4887% ( 1) 00:13:51.682 7.096 - 7.123: 99.4948% ( 1) 00:13:51.682 7.290 - 7.346: 99.5009% ( 1) 00:13:51.682 9.906 - 9.962: 99.5070% ( 1) 00:13:51.682 14.915 - 15.026: 99.5131% ( 1) 00:13:51.682 2008.821 - 2023.068: 99.5191% ( 1) 00:13:51.682 3020.355 - 3034.602: 99.5252% ( 1) 00:13:51.682 3048.849 - 3063.096: 99.5313% ( 1) 00:13:51.682 3989.148 - 4017.642: 99.9878% ( 75) 00:13:51.682 4986.435 - 5014.929: 100.0000% ( 2) 00:13:51.682 00:13:51.682 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:51.682 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:51.682 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:51.682 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:51.682 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:51.941 [ 00:13:51.941 { 00:13:51.941 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:51.941 "subtype": "Discovery", 00:13:51.941 "listen_addresses": [], 00:13:51.941 "allow_any_host": true, 00:13:51.941 "hosts": [] 00:13:51.942 }, 00:13:51.942 { 00:13:51.942 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:51.942 "subtype": "NVMe", 
00:13:51.942 "listen_addresses": [ 00:13:51.942 { 00:13:51.942 "trtype": "VFIOUSER", 00:13:51.942 "adrfam": "IPv4", 00:13:51.942 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:51.942 "trsvcid": "0" 00:13:51.942 } 00:13:51.942 ], 00:13:51.942 "allow_any_host": true, 00:13:51.942 "hosts": [], 00:13:51.942 "serial_number": "SPDK1", 00:13:51.942 "model_number": "SPDK bdev Controller", 00:13:51.942 "max_namespaces": 32, 00:13:51.942 "min_cntlid": 1, 00:13:51.942 "max_cntlid": 65519, 00:13:51.942 "namespaces": [ 00:13:51.942 { 00:13:51.942 "nsid": 1, 00:13:51.942 "bdev_name": "Malloc1", 00:13:51.942 "name": "Malloc1", 00:13:51.942 "nguid": "7E201E657B854D528242244AD50A1215", 00:13:51.942 "uuid": "7e201e65-7b85-4d52-8242-244ad50a1215" 00:13:51.942 } 00:13:51.942 ] 00:13:51.942 }, 00:13:51.942 { 00:13:51.942 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:51.942 "subtype": "NVMe", 00:13:51.942 "listen_addresses": [ 00:13:51.942 { 00:13:51.942 "trtype": "VFIOUSER", 00:13:51.942 "adrfam": "IPv4", 00:13:51.942 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:51.942 "trsvcid": "0" 00:13:51.942 } 00:13:51.942 ], 00:13:51.942 "allow_any_host": true, 00:13:51.942 "hosts": [], 00:13:51.942 "serial_number": "SPDK2", 00:13:51.942 "model_number": "SPDK bdev Controller", 00:13:51.942 "max_namespaces": 32, 00:13:51.942 "min_cntlid": 1, 00:13:51.942 "max_cntlid": 65519, 00:13:51.942 "namespaces": [ 00:13:51.942 { 00:13:51.942 "nsid": 1, 00:13:51.942 "bdev_name": "Malloc2", 00:13:51.942 "name": "Malloc2", 00:13:51.942 "nguid": "2C31DA8CD5BD419D93FF32405C2B08AE", 00:13:51.942 "uuid": "2c31da8c-d5bd-419d-93ff-32405c2b08ae" 00:13:51.942 } 00:13:51.942 ] 00:13:51.942 } 00:13:51.942 ] 00:13:51.942 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:51.942 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2127373 00:13:51.942 15:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:51.942 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:51.942 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:51.942 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:51.942 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:51.942 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:51.942 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:51.942 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:52.201 [2024-11-20 15:22:55.949832] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:52.201 Malloc3 00:13:52.201 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:52.460 [2024-11-20 15:22:56.175621] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:52.460 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:52.460 Asynchronous Event 
Request test 00:13:52.460 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:52.460 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:52.460 Registering asynchronous event callbacks... 00:13:52.460 Starting namespace attribute notice tests for all controllers... 00:13:52.460 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:52.460 aer_cb - Changed Namespace 00:13:52.460 Cleaning up... 00:13:52.721 [ 00:13:52.721 { 00:13:52.721 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:52.721 "subtype": "Discovery", 00:13:52.721 "listen_addresses": [], 00:13:52.721 "allow_any_host": true, 00:13:52.721 "hosts": [] 00:13:52.721 }, 00:13:52.721 { 00:13:52.721 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:52.721 "subtype": "NVMe", 00:13:52.721 "listen_addresses": [ 00:13:52.721 { 00:13:52.721 "trtype": "VFIOUSER", 00:13:52.721 "adrfam": "IPv4", 00:13:52.721 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:52.721 "trsvcid": "0" 00:13:52.721 } 00:13:52.721 ], 00:13:52.721 "allow_any_host": true, 00:13:52.721 "hosts": [], 00:13:52.721 "serial_number": "SPDK1", 00:13:52.721 "model_number": "SPDK bdev Controller", 00:13:52.721 "max_namespaces": 32, 00:13:52.721 "min_cntlid": 1, 00:13:52.721 "max_cntlid": 65519, 00:13:52.721 "namespaces": [ 00:13:52.721 { 00:13:52.721 "nsid": 1, 00:13:52.721 "bdev_name": "Malloc1", 00:13:52.721 "name": "Malloc1", 00:13:52.721 "nguid": "7E201E657B854D528242244AD50A1215", 00:13:52.721 "uuid": "7e201e65-7b85-4d52-8242-244ad50a1215" 00:13:52.721 }, 00:13:52.721 { 00:13:52.721 "nsid": 2, 00:13:52.721 "bdev_name": "Malloc3", 00:13:52.721 "name": "Malloc3", 00:13:52.721 "nguid": "DC8A827839C54F6A9585EF1B6D04A9D9", 00:13:52.721 "uuid": "dc8a8278-39c5-4f6a-9585-ef1b6d04a9d9" 00:13:52.721 } 00:13:52.721 ] 00:13:52.721 }, 00:13:52.722 { 00:13:52.722 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:52.722 "subtype": "NVMe", 00:13:52.722 "listen_addresses": [ 00:13:52.722 { 00:13:52.722 
"trtype": "VFIOUSER", 00:13:52.722 "adrfam": "IPv4", 00:13:52.722 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:52.722 "trsvcid": "0" 00:13:52.722 } 00:13:52.722 ], 00:13:52.722 "allow_any_host": true, 00:13:52.722 "hosts": [], 00:13:52.722 "serial_number": "SPDK2", 00:13:52.722 "model_number": "SPDK bdev Controller", 00:13:52.722 "max_namespaces": 32, 00:13:52.722 "min_cntlid": 1, 00:13:52.722 "max_cntlid": 65519, 00:13:52.722 "namespaces": [ 00:13:52.722 { 00:13:52.722 "nsid": 1, 00:13:52.722 "bdev_name": "Malloc2", 00:13:52.722 "name": "Malloc2", 00:13:52.722 "nguid": "2C31DA8CD5BD419D93FF32405C2B08AE", 00:13:52.722 "uuid": "2c31da8c-d5bd-419d-93ff-32405c2b08ae" 00:13:52.722 } 00:13:52.722 ] 00:13:52.722 } 00:13:52.722 ] 00:13:52.722 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2127373 00:13:52.722 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:52.722 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:52.722 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:52.722 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:52.722 [2024-11-20 15:22:56.428966] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:13:52.722 [2024-11-20 15:22:56.429011] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2127382 ] 00:13:52.722 [2024-11-20 15:22:56.468771] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:52.722 [2024-11-20 15:22:56.473035] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:52.722 [2024-11-20 15:22:56.473059] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff8f6c21000 00:13:52.722 [2024-11-20 15:22:56.474038] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:52.722 [2024-11-20 15:22:56.475047] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:52.722 [2024-11-20 15:22:56.476049] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:52.722 [2024-11-20 15:22:56.477055] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:52.722 [2024-11-20 15:22:56.478070] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:52.722 [2024-11-20 15:22:56.479078] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:52.722 [2024-11-20 15:22:56.480084] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:52.722 
[2024-11-20 15:22:56.481089] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:52.722 [2024-11-20 15:22:56.482106] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:52.722 [2024-11-20 15:22:56.482116] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff8f6c16000 00:13:52.722 [2024-11-20 15:22:56.483059] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:52.722 [2024-11-20 15:22:56.497405] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:52.722 [2024-11-20 15:22:56.497433] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:52.722 [2024-11-20 15:22:56.499485] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:52.722 [2024-11-20 15:22:56.499522] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:52.722 [2024-11-20 15:22:56.499590] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:52.722 [2024-11-20 15:22:56.499603] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:52.722 [2024-11-20 15:22:56.499608] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:52.722 [2024-11-20 15:22:56.500485] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:52.722 [2024-11-20 15:22:56.500495] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:52.722 [2024-11-20 15:22:56.500501] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:52.722 [2024-11-20 15:22:56.501485] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:52.722 [2024-11-20 15:22:56.501494] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:52.722 [2024-11-20 15:22:56.501501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:52.722 [2024-11-20 15:22:56.502495] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:52.722 [2024-11-20 15:22:56.502504] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:52.722 [2024-11-20 15:22:56.503499] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:52.722 [2024-11-20 15:22:56.503507] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:52.722 [2024-11-20 15:22:56.503512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:52.722 [2024-11-20 15:22:56.503518] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:52.722 [2024-11-20 15:22:56.503626] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:52.722 [2024-11-20 15:22:56.503630] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:52.722 [2024-11-20 15:22:56.503635] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:52.722 [2024-11-20 15:22:56.504505] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:52.722 [2024-11-20 15:22:56.505508] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:52.722 [2024-11-20 15:22:56.506515] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:52.722 [2024-11-20 15:22:56.507515] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:52.722 [2024-11-20 15:22:56.507551] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:52.722 [2024-11-20 15:22:56.508524] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:52.722 [2024-11-20 15:22:56.508532] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:52.722 [2024-11-20 15:22:56.508537] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:52.722 [2024-11-20 15:22:56.508554] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:52.722 [2024-11-20 15:22:56.508561] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:52.722 [2024-11-20 15:22:56.508572] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:52.722 [2024-11-20 15:22:56.508576] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:52.722 [2024-11-20 15:22:56.508579] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:52.722 [2024-11-20 15:22:56.508591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:52.722 [2024-11-20 15:22:56.511956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:52.722 [2024-11-20 15:22:56.511968] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:52.722 [2024-11-20 15:22:56.511972] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:52.722 [2024-11-20 15:22:56.511976] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:52.722 [2024-11-20 15:22:56.511980] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:52.722 [2024-11-20 15:22:56.511987] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:52.722 [2024-11-20 15:22:56.511991] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:52.722 [2024-11-20 15:22:56.511996] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:52.722 [2024-11-20 15:22:56.512004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:52.722 [2024-11-20 15:22:56.512013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:52.722 [2024-11-20 15:22:56.518953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:52.723 [2024-11-20 15:22:56.518967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:52.723 [2024-11-20 15:22:56.518975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:52.723 [2024-11-20 15:22:56.518982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:52.723 [2024-11-20 15:22:56.518990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:52.723 [2024-11-20 15:22:56.518994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:52.723 [2024-11-20 15:22:56.519000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:52.723 [2024-11-20 15:22:56.519008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:52.723 [2024-11-20 15:22:56.526953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:52.723 [2024-11-20 15:22:56.526964] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:52.723 [2024-11-20 15:22:56.526969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:52.723 [2024-11-20 15:22:56.526975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:52.723 [2024-11-20 15:22:56.526980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:52.723 [2024-11-20 15:22:56.526988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:52.723 [2024-11-20 15:22:56.534952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:52.723 [2024-11-20 15:22:56.535007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:52.723 [2024-11-20 15:22:56.535015] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:52.723 
[2024-11-20 15:22:56.535022] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:52.723 [2024-11-20 15:22:56.535026] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:52.723 [2024-11-20 15:22:56.535029] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:52.723 [2024-11-20 15:22:56.535035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:52.723 [2024-11-20 15:22:56.542954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:52.723 [2024-11-20 15:22:56.542968] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:52.723 [2024-11-20 15:22:56.542977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:52.723 [2024-11-20 15:22:56.542984] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:52.723 [2024-11-20 15:22:56.542990] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:52.723 [2024-11-20 15:22:56.542996] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:52.723 [2024-11-20 15:22:56.542999] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:52.723 [2024-11-20 15:22:56.543005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:52.723 [2024-11-20 15:22:56.550954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:52.723 [2024-11-20 15:22:56.550978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:52.723 [2024-11-20 15:22:56.550986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:52.723 [2024-11-20 15:22:56.550992] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:52.723 [2024-11-20 15:22:56.550996] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:52.723 [2024-11-20 15:22:56.551000] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:52.723 [2024-11-20 15:22:56.551005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:52.723 [2024-11-20 15:22:56.558953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:52.723 [2024-11-20 15:22:56.558963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:52.723 [2024-11-20 15:22:56.558969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:52.723 [2024-11-20 15:22:56.558977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:13:52.723 [2024-11-20 15:22:56.558982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:13:52.723 [2024-11-20 15:22:56.558987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:52.723 [2024-11-20 15:22:56.558991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:52.723 [2024-11-20 15:22:56.558996] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:52.723 [2024-11-20 15:22:56.559000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:52.723 [2024-11-20 15:22:56.559005] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:52.723 [2024-11-20 15:22:56.559021] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:52.723 [2024-11-20 15:22:56.566952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:52.723 [2024-11-20 15:22:56.566965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:52.723 [2024-11-20 15:22:56.574951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:52.723 [2024-11-20 15:22:56.574964] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:52.723 [2024-11-20 15:22:56.582952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:52.723 [2024-11-20 
15:22:56.582967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:52.723 [2024-11-20 15:22:56.590953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:52.723 [2024-11-20 15:22:56.590970] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:52.723 [2024-11-20 15:22:56.590975] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:52.723 [2024-11-20 15:22:56.590978] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:52.723 [2024-11-20 15:22:56.590981] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:52.723 [2024-11-20 15:22:56.590984] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:52.723 [2024-11-20 15:22:56.590990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:52.723 [2024-11-20 15:22:56.590997] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:52.723 [2024-11-20 15:22:56.591001] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:52.723 [2024-11-20 15:22:56.591004] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:52.723 [2024-11-20 15:22:56.591009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:52.723 [2024-11-20 15:22:56.591015] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:52.723 [2024-11-20 15:22:56.591019] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:52.723 [2024-11-20 15:22:56.591022] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:52.723 [2024-11-20 15:22:56.591027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:52.723 [2024-11-20 15:22:56.591034] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:52.723 [2024-11-20 15:22:56.591038] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:52.723 [2024-11-20 15:22:56.591041] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:52.723 [2024-11-20 15:22:56.591047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:52.723 [2024-11-20 15:22:56.598954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:52.723 [2024-11-20 15:22:56.598970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:52.723 [2024-11-20 15:22:56.598979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:52.723 [2024-11-20 15:22:56.598985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:52.723 ===================================================== 00:13:52.723 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:52.723 ===================================================== 00:13:52.723 Controller Capabilities/Features 00:13:52.723 
================================ 00:13:52.723 Vendor ID: 4e58 00:13:52.723 Subsystem Vendor ID: 4e58 00:13:52.723 Serial Number: SPDK2 00:13:52.723 Model Number: SPDK bdev Controller 00:13:52.723 Firmware Version: 25.01 00:13:52.723 Recommended Arb Burst: 6 00:13:52.723 IEEE OUI Identifier: 8d 6b 50 00:13:52.723 Multi-path I/O 00:13:52.723 May have multiple subsystem ports: Yes 00:13:52.723 May have multiple controllers: Yes 00:13:52.723 Associated with SR-IOV VF: No 00:13:52.723 Max Data Transfer Size: 131072 00:13:52.724 Max Number of Namespaces: 32 00:13:52.724 Max Number of I/O Queues: 127 00:13:52.724 NVMe Specification Version (VS): 1.3 00:13:52.724 NVMe Specification Version (Identify): 1.3 00:13:52.724 Maximum Queue Entries: 256 00:13:52.724 Contiguous Queues Required: Yes 00:13:52.724 Arbitration Mechanisms Supported 00:13:52.724 Weighted Round Robin: Not Supported 00:13:52.724 Vendor Specific: Not Supported 00:13:52.724 Reset Timeout: 15000 ms 00:13:52.724 Doorbell Stride: 4 bytes 00:13:52.724 NVM Subsystem Reset: Not Supported 00:13:52.724 Command Sets Supported 00:13:52.724 NVM Command Set: Supported 00:13:52.724 Boot Partition: Not Supported 00:13:52.724 Memory Page Size Minimum: 4096 bytes 00:13:52.724 Memory Page Size Maximum: 4096 bytes 00:13:52.724 Persistent Memory Region: Not Supported 00:13:52.724 Optional Asynchronous Events Supported 00:13:52.724 Namespace Attribute Notices: Supported 00:13:52.724 Firmware Activation Notices: Not Supported 00:13:52.724 ANA Change Notices: Not Supported 00:13:52.724 PLE Aggregate Log Change Notices: Not Supported 00:13:52.724 LBA Status Info Alert Notices: Not Supported 00:13:52.724 EGE Aggregate Log Change Notices: Not Supported 00:13:52.724 Normal NVM Subsystem Shutdown event: Not Supported 00:13:52.724 Zone Descriptor Change Notices: Not Supported 00:13:52.724 Discovery Log Change Notices: Not Supported 00:13:52.724 Controller Attributes 00:13:52.724 128-bit Host Identifier: Supported 00:13:52.724 
Non-Operational Permissive Mode: Not Supported 00:13:52.724 NVM Sets: Not Supported 00:13:52.724 Read Recovery Levels: Not Supported 00:13:52.724 Endurance Groups: Not Supported 00:13:52.724 Predictable Latency Mode: Not Supported 00:13:52.724 Traffic Based Keep ALive: Not Supported 00:13:52.724 Namespace Granularity: Not Supported 00:13:52.724 SQ Associations: Not Supported 00:13:52.724 UUID List: Not Supported 00:13:52.724 Multi-Domain Subsystem: Not Supported 00:13:52.724 Fixed Capacity Management: Not Supported 00:13:52.724 Variable Capacity Management: Not Supported 00:13:52.724 Delete Endurance Group: Not Supported 00:13:52.724 Delete NVM Set: Not Supported 00:13:52.724 Extended LBA Formats Supported: Not Supported 00:13:52.724 Flexible Data Placement Supported: Not Supported 00:13:52.724 00:13:52.724 Controller Memory Buffer Support 00:13:52.724 ================================ 00:13:52.724 Supported: No 00:13:52.724 00:13:52.724 Persistent Memory Region Support 00:13:52.724 ================================ 00:13:52.724 Supported: No 00:13:52.724 00:13:52.724 Admin Command Set Attributes 00:13:52.724 ============================ 00:13:52.724 Security Send/Receive: Not Supported 00:13:52.724 Format NVM: Not Supported 00:13:52.724 Firmware Activate/Download: Not Supported 00:13:52.724 Namespace Management: Not Supported 00:13:52.724 Device Self-Test: Not Supported 00:13:52.724 Directives: Not Supported 00:13:52.724 NVMe-MI: Not Supported 00:13:52.724 Virtualization Management: Not Supported 00:13:52.724 Doorbell Buffer Config: Not Supported 00:13:52.724 Get LBA Status Capability: Not Supported 00:13:52.724 Command & Feature Lockdown Capability: Not Supported 00:13:52.724 Abort Command Limit: 4 00:13:52.724 Async Event Request Limit: 4 00:13:52.724 Number of Firmware Slots: N/A 00:13:52.724 Firmware Slot 1 Read-Only: N/A 00:13:52.724 Firmware Activation Without Reset: N/A 00:13:52.724 Multiple Update Detection Support: N/A 00:13:52.724 Firmware Update 
Granularity: No Information Provided 00:13:52.724 Per-Namespace SMART Log: No 00:13:52.724 Asymmetric Namespace Access Log Page: Not Supported 00:13:52.724 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:52.724 Command Effects Log Page: Supported 00:13:52.724 Get Log Page Extended Data: Supported 00:13:52.724 Telemetry Log Pages: Not Supported 00:13:52.724 Persistent Event Log Pages: Not Supported 00:13:52.724 Supported Log Pages Log Page: May Support 00:13:52.724 Commands Supported & Effects Log Page: Not Supported 00:13:52.724 Feature Identifiers & Effects Log Page:May Support 00:13:52.724 NVMe-MI Commands & Effects Log Page: May Support 00:13:52.724 Data Area 4 for Telemetry Log: Not Supported 00:13:52.724 Error Log Page Entries Supported: 128 00:13:52.724 Keep Alive: Supported 00:13:52.724 Keep Alive Granularity: 10000 ms 00:13:52.724 00:13:52.724 NVM Command Set Attributes 00:13:52.724 ========================== 00:13:52.724 Submission Queue Entry Size 00:13:52.724 Max: 64 00:13:52.724 Min: 64 00:13:52.724 Completion Queue Entry Size 00:13:52.724 Max: 16 00:13:52.724 Min: 16 00:13:52.724 Number of Namespaces: 32 00:13:52.724 Compare Command: Supported 00:13:52.724 Write Uncorrectable Command: Not Supported 00:13:52.724 Dataset Management Command: Supported 00:13:52.724 Write Zeroes Command: Supported 00:13:52.724 Set Features Save Field: Not Supported 00:13:52.724 Reservations: Not Supported 00:13:52.724 Timestamp: Not Supported 00:13:52.724 Copy: Supported 00:13:52.724 Volatile Write Cache: Present 00:13:52.724 Atomic Write Unit (Normal): 1 00:13:52.724 Atomic Write Unit (PFail): 1 00:13:52.724 Atomic Compare & Write Unit: 1 00:13:52.724 Fused Compare & Write: Supported 00:13:52.724 Scatter-Gather List 00:13:52.724 SGL Command Set: Supported (Dword aligned) 00:13:52.724 SGL Keyed: Not Supported 00:13:52.724 SGL Bit Bucket Descriptor: Not Supported 00:13:52.724 SGL Metadata Pointer: Not Supported 00:13:52.724 Oversized SGL: Not Supported 00:13:52.724 SGL 
Metadata Address: Not Supported 00:13:52.724 SGL Offset: Not Supported 00:13:52.724 Transport SGL Data Block: Not Supported 00:13:52.724 Replay Protected Memory Block: Not Supported 00:13:52.724 00:13:52.724 Firmware Slot Information 00:13:52.724 ========================= 00:13:52.724 Active slot: 1 00:13:52.724 Slot 1 Firmware Revision: 25.01 00:13:52.724 00:13:52.724 00:13:52.724 Commands Supported and Effects 00:13:52.724 ============================== 00:13:52.724 Admin Commands 00:13:52.724 -------------- 00:13:52.724 Get Log Page (02h): Supported 00:13:52.724 Identify (06h): Supported 00:13:52.724 Abort (08h): Supported 00:13:52.724 Set Features (09h): Supported 00:13:52.724 Get Features (0Ah): Supported 00:13:52.724 Asynchronous Event Request (0Ch): Supported 00:13:52.724 Keep Alive (18h): Supported 00:13:52.724 I/O Commands 00:13:52.724 ------------ 00:13:52.724 Flush (00h): Supported LBA-Change 00:13:52.724 Write (01h): Supported LBA-Change 00:13:52.724 Read (02h): Supported 00:13:52.724 Compare (05h): Supported 00:13:52.724 Write Zeroes (08h): Supported LBA-Change 00:13:52.724 Dataset Management (09h): Supported LBA-Change 00:13:52.724 Copy (19h): Supported LBA-Change 00:13:52.724 00:13:52.724 Error Log 00:13:52.724 ========= 00:13:52.724 00:13:52.724 Arbitration 00:13:52.724 =========== 00:13:52.724 Arbitration Burst: 1 00:13:52.724 00:13:52.724 Power Management 00:13:52.724 ================ 00:13:52.724 Number of Power States: 1 00:13:52.724 Current Power State: Power State #0 00:13:52.724 Power State #0: 00:13:52.724 Max Power: 0.00 W 00:13:52.724 Non-Operational State: Operational 00:13:52.724 Entry Latency: Not Reported 00:13:52.724 Exit Latency: Not Reported 00:13:52.724 Relative Read Throughput: 0 00:13:52.724 Relative Read Latency: 0 00:13:52.724 Relative Write Throughput: 0 00:13:52.724 Relative Write Latency: 0 00:13:52.724 Idle Power: Not Reported 00:13:52.724 Active Power: Not Reported 00:13:52.724 Non-Operational Permissive Mode: Not 
Supported 00:13:52.724 00:13:52.724 Health Information 00:13:52.724 ================== 00:13:52.724 Critical Warnings: 00:13:52.724 Available Spare Space: OK 00:13:52.724 Temperature: OK 00:13:52.724 Device Reliability: OK 00:13:52.724 Read Only: No 00:13:52.724 Volatile Memory Backup: OK 00:13:52.724 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:52.724 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:52.724 Available Spare: 0% 00:13:52.724 Available Sp[2024-11-20 15:22:56.599076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:52.724 [2024-11-20 15:22:56.606954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:52.724 [2024-11-20 15:22:56.606985] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:52.724 [2024-11-20 15:22:56.606994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.724 [2024-11-20 15:22:56.607002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.724 [2024-11-20 15:22:56.607008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.724 [2024-11-20 15:22:56.607013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.725 [2024-11-20 15:22:56.607056] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:52.725 [2024-11-20 15:22:56.607066] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:52.725 
[2024-11-20 15:22:56.608059] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:52.725 [2024-11-20 15:22:56.608102] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:52.725 [2024-11-20 15:22:56.608109] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:52.725 [2024-11-20 15:22:56.609058] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:52.725 [2024-11-20 15:22:56.609073] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:52.725 [2024-11-20 15:22:56.609121] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:52.725 [2024-11-20 15:22:56.610105] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:52.984 are Threshold: 0% 00:13:52.984 Life Percentage Used: 0% 00:13:52.984 Data Units Read: 0 00:13:52.984 Data Units Written: 0 00:13:52.984 Host Read Commands: 0 00:13:52.984 Host Write Commands: 0 00:13:52.984 Controller Busy Time: 0 minutes 00:13:52.984 Power Cycles: 0 00:13:52.984 Power On Hours: 0 hours 00:13:52.984 Unsafe Shutdowns: 0 00:13:52.984 Unrecoverable Media Errors: 0 00:13:52.984 Lifetime Error Log Entries: 0 00:13:52.984 Warning Temperature Time: 0 minutes 00:13:52.984 Critical Temperature Time: 0 minutes 00:13:52.984 00:13:52.984 Number of Queues 00:13:52.984 ================ 00:13:52.984 Number of I/O Submission Queues: 127 00:13:52.984 Number of I/O Completion Queues: 127 00:13:52.984 00:13:52.984 Active Namespaces 00:13:52.984 ================= 00:13:52.984 Namespace ID:1 00:13:52.984 Error Recovery Timeout: Unlimited 
00:13:52.984 Command Set Identifier: NVM (00h) 00:13:52.984 Deallocate: Supported 00:13:52.984 Deallocated/Unwritten Error: Not Supported 00:13:52.984 Deallocated Read Value: Unknown 00:13:52.984 Deallocate in Write Zeroes: Not Supported 00:13:52.984 Deallocated Guard Field: 0xFFFF 00:13:52.984 Flush: Supported 00:13:52.984 Reservation: Supported 00:13:52.984 Namespace Sharing Capabilities: Multiple Controllers 00:13:52.984 Size (in LBAs): 131072 (0GiB) 00:13:52.984 Capacity (in LBAs): 131072 (0GiB) 00:13:52.984 Utilization (in LBAs): 131072 (0GiB) 00:13:52.984 NGUID: 2C31DA8CD5BD419D93FF32405C2B08AE 00:13:52.984 UUID: 2c31da8c-d5bd-419d-93ff-32405c2b08ae 00:13:52.984 Thin Provisioning: Not Supported 00:13:52.984 Per-NS Atomic Units: Yes 00:13:52.984 Atomic Boundary Size (Normal): 0 00:13:52.984 Atomic Boundary Size (PFail): 0 00:13:52.984 Atomic Boundary Offset: 0 00:13:52.984 Maximum Single Source Range Length: 65535 00:13:52.984 Maximum Copy Length: 65535 00:13:52.984 Maximum Source Range Count: 1 00:13:52.984 NGUID/EUI64 Never Reused: No 00:13:52.984 Namespace Write Protected: No 00:13:52.984 Number of LBA Formats: 1 00:13:52.984 Current LBA Format: LBA Format #00 00:13:52.984 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:52.984 00:13:52.984 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:52.984 [2024-11-20 15:22:56.840379] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:58.253 Initializing NVMe Controllers 00:13:58.253 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:58.253 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:13:58.253 Initialization complete. Launching workers. 00:13:58.253 ======================================================== 00:13:58.253 Latency(us) 00:13:58.253 Device Information : IOPS MiB/s Average min max 00:13:58.253 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39959.40 156.09 3203.27 958.08 8603.60 00:13:58.253 ======================================================== 00:13:58.253 Total : 39959.40 156.09 3203.27 958.08 8603.60 00:13:58.253 00:13:58.253 [2024-11-20 15:23:01.941206] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:58.253 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:58.512 [2024-11-20 15:23:02.171905] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:03.786 Initializing NVMe Controllers 00:14:03.786 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:03.786 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:03.786 Initialization complete. Launching workers. 
00:14:03.786 ======================================================== 00:14:03.787 Latency(us) 00:14:03.787 Device Information : IOPS MiB/s Average min max 00:14:03.787 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39949.56 156.05 3204.98 970.31 10596.42 00:14:03.787 ======================================================== 00:14:03.787 Total : 39949.56 156.05 3204.98 970.31 10596.42 00:14:03.787 00:14:03.787 [2024-11-20 15:23:07.196763] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:03.787 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:03.787 [2024-11-20 15:23:07.400757] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:09.061 [2024-11-20 15:23:12.547050] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:09.061 Initializing NVMe Controllers 00:14:09.061 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:09.061 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:09.061 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:09.061 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:09.061 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:09.061 Initialization complete. Launching workers. 
00:14:09.061 Starting thread on core 2 00:14:09.061 Starting thread on core 3 00:14:09.061 Starting thread on core 1 00:14:09.061 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:09.061 [2024-11-20 15:23:12.845357] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:13.257 [2024-11-20 15:23:16.520735] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:13.257 Initializing NVMe Controllers 00:14:13.257 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:13.257 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:13.257 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:13.257 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:13.257 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:13.257 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:13.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:13.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:13.257 Initialization complete. Launching workers. 
00:14:13.257 Starting thread on core 1 with urgent priority queue 00:14:13.257 Starting thread on core 2 with urgent priority queue 00:14:13.257 Starting thread on core 3 with urgent priority queue 00:14:13.257 Starting thread on core 0 with urgent priority queue 00:14:13.257 SPDK bdev Controller (SPDK2 ) core 0: 7914.67 IO/s 12.63 secs/100000 ios 00:14:13.257 SPDK bdev Controller (SPDK2 ) core 1: 8828.67 IO/s 11.33 secs/100000 ios 00:14:13.257 SPDK bdev Controller (SPDK2 ) core 2: 7067.67 IO/s 14.15 secs/100000 ios 00:14:13.257 SPDK bdev Controller (SPDK2 ) core 3: 10663.67 IO/s 9.38 secs/100000 ios 00:14:13.257 ======================================================== 00:14:13.257 00:14:13.257 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:13.257 [2024-11-20 15:23:16.803631] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:13.258 Initializing NVMe Controllers 00:14:13.258 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:13.258 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:13.258 Namespace ID: 1 size: 0GB 00:14:13.258 Initialization complete. 00:14:13.258 INFO: using host memory buffer for IO 00:14:13.258 Hello world! 
00:14:13.258 [2024-11-20 15:23:16.813699] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:13.258 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:13.258 [2024-11-20 15:23:17.099941] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:14.637 Initializing NVMe Controllers 00:14:14.637 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:14.637 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:14.637 Initialization complete. Launching workers. 00:14:14.637 submit (in ns) avg, min, max = 7251.0, 3265.2, 5994544.3 00:14:14.637 complete (in ns) avg, min, max = 20730.6, 1806.1, 4020807.8 00:14:14.637 00:14:14.637 Submit histogram 00:14:14.637 ================ 00:14:14.637 Range in us Cumulative Count 00:14:14.637 3.256 - 3.270: 0.0061% ( 1) 00:14:14.637 3.270 - 3.283: 0.1535% ( 24) 00:14:14.637 3.283 - 3.297: 1.5898% ( 234) 00:14:14.637 3.297 - 3.311: 5.8314% ( 691) 00:14:14.637 3.311 - 3.325: 11.8163% ( 975) 00:14:14.637 3.325 - 3.339: 17.9117% ( 993) 00:14:14.637 3.339 - 3.353: 24.6025% ( 1090) 00:14:14.637 3.353 - 3.367: 30.5383% ( 967) 00:14:14.637 3.367 - 3.381: 35.5963% ( 824) 00:14:14.637 3.381 - 3.395: 41.4892% ( 960) 00:14:14.637 3.395 - 3.409: 46.3385% ( 790) 00:14:14.637 3.409 - 3.423: 49.8128% ( 566) 00:14:14.637 3.423 - 3.437: 53.5633% ( 611) 00:14:14.637 3.437 - 3.450: 59.5728% ( 979) 00:14:14.637 3.450 - 3.464: 66.8099% ( 1179) 00:14:14.637 3.464 - 3.478: 71.1436% ( 706) 00:14:14.637 3.478 - 3.492: 75.9069% ( 776) 00:14:14.637 3.492 - 3.506: 80.5721% ( 760) 00:14:14.637 3.506 - 3.520: 83.6290% ( 498) 00:14:14.637 3.520 - 3.534: 85.8265% ( 358) 00:14:14.637 3.534 - 3.548: 86.9192% ( 178) 
00:14:14.637 3.548 - 3.562: 87.4716% ( 90) 00:14:14.637 3.562 - 3.590: 88.2082% ( 120) 00:14:14.637 3.590 - 3.617: 89.6630% ( 237) 00:14:14.637 3.617 - 3.645: 91.2590% ( 260) 00:14:14.637 3.645 - 3.673: 92.8120% ( 253) 00:14:14.637 3.673 - 3.701: 94.5614% ( 285) 00:14:14.637 3.701 - 3.729: 96.4029% ( 300) 00:14:14.637 3.729 - 3.757: 97.7472% ( 219) 00:14:14.637 3.757 - 3.784: 98.5145% ( 125) 00:14:14.637 3.784 - 3.812: 98.9810% ( 76) 00:14:14.637 3.812 - 3.840: 99.3309% ( 57) 00:14:14.637 3.840 - 3.868: 99.5212% ( 31) 00:14:14.637 3.868 - 3.896: 99.5949% ( 12) 00:14:14.637 3.896 - 3.923: 99.6133% ( 3) 00:14:14.637 3.923 - 3.951: 99.6194% ( 1) 00:14:14.637 3.979 - 4.007: 99.6256% ( 1) 00:14:14.637 4.007 - 4.035: 99.6378% ( 2) 00:14:14.637 4.063 - 4.090: 99.6440% ( 1) 00:14:14.637 4.146 - 4.174: 99.6563% ( 2) 00:14:14.637 5.064 - 5.092: 99.6624% ( 1) 00:14:14.637 5.370 - 5.398: 99.6747% ( 2) 00:14:14.637 5.426 - 5.454: 99.6869% ( 2) 00:14:14.637 5.510 - 5.537: 99.6931% ( 1) 00:14:14.637 5.565 - 5.593: 99.6992% ( 1) 00:14:14.637 5.593 - 5.621: 99.7054% ( 1) 00:14:14.637 5.649 - 5.677: 99.7115% ( 1) 00:14:14.637 5.816 - 5.843: 99.7176% ( 1) 00:14:14.637 5.843 - 5.871: 99.7238% ( 1) 00:14:14.637 6.122 - 6.150: 99.7299% ( 1) 00:14:14.637 6.372 - 6.400: 99.7361% ( 1) 00:14:14.637 6.400 - 6.428: 99.7422% ( 1) 00:14:14.637 6.483 - 6.511: 99.7483% ( 1) 00:14:14.637 6.790 - 6.817: 99.7545% ( 1) 00:14:14.637 6.817 - 6.845: 99.7606% ( 1) 00:14:14.638 6.901 - 6.929: 99.7667% ( 1) 00:14:14.638 6.929 - 6.957: 99.7729% ( 1) 00:14:14.638 7.012 - 7.040: 99.7790% ( 1) 00:14:14.638 7.096 - 7.123: 99.7852% ( 1) 00:14:14.638 7.179 - 7.235: 99.7913% ( 1) 00:14:14.638 7.235 - 7.290: 99.7974% ( 1) 00:14:14.638 7.346 - 7.402: 99.8036% ( 1) 00:14:14.638 7.402 - 7.457: 99.8097% ( 1) 00:14:14.638 7.457 - 7.513: 99.8158% ( 1) 00:14:14.638 7.513 - 7.569: 99.8220% ( 1) 00:14:14.638 7.569 - 7.624: 99.8281% ( 1) 00:14:14.638 7.680 - 7.736: 99.8343% ( 1) 00:14:14.638 7.736 - 7.791: 99.8404% ( 1) 
00:14:14.638 7.847 - 7.903: 99.8465% ( 1) 00:14:14.638 7.903 - 7.958: 99.8527% ( 1) 00:14:14.638 8.014 - 8.070: 99.8650% ( 2) 00:14:14.638 8.125 - 8.181: 99.8711% ( 1) 00:14:14.638 8.237 - 8.292: 99.8772% ( 1) 00:14:14.638 8.403 - 8.459: 99.8834% ( 1) 00:14:14.638 8.626 - 8.682: 99.8895% ( 1) 00:14:14.638 8.904 - 8.960: 99.8956% ( 1) 00:14:14.638 9.183 - 9.238: 99.9018% ( 1) 00:14:14.638 9.461 - 9.517: 99.9079% ( 1) 00:14:14.638 3989.148 - 4017.642: 99.9939% ( 14) 00:14:14.638 [2024-11-20 15:23:18.193987] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:14.638 5983.722 - 6012.216: 100.0000% ( 1) 00:14:14.638 00:14:14.638 Complete histogram 00:14:14.638 ================== 00:14:14.638 Range in us Cumulative Count 00:14:14.638 1.795 - 1.809: 0.0123% ( 2) 00:14:14.638 1.809 - 1.823: 7.3844% ( 1201) 00:14:14.638 1.823 - 1.837: 63.5504% ( 9150) 00:14:14.638 1.837 - 1.850: 81.9716% ( 3001) 00:14:14.638 1.850 - 1.864: 87.3550% ( 877) 00:14:14.638 1.864 - 1.878: 89.0123% ( 270) 00:14:14.638 1.878 - 1.892: 91.6825% ( 435) 00:14:14.638 1.892 - 1.906: 95.9610% ( 697) 00:14:14.638 1.906 - 1.920: 97.5938% ( 266) 00:14:14.638 1.920 - 1.934: 98.1462% ( 90) 00:14:14.638 1.934 - 1.948: 98.4224% ( 45) 00:14:14.638 1.948 - 1.962: 98.7171% ( 48) 00:14:14.638 1.962 - 1.976: 99.0056% ( 47) 00:14:14.638 1.976 - 1.990: 99.1959% ( 31) 00:14:14.638 1.990 - 2.003: 99.2941% ( 16) 00:14:14.638 2.003 - 2.017: 99.3125% ( 3) 00:14:14.638 2.017 - 2.031: 99.3248% ( 2) 00:14:14.638 2.031 - 2.045: 99.3371% ( 2) 00:14:14.638 2.045 - 2.059: 99.3555% ( 3) 00:14:14.638 2.101 - 2.115: 99.3616% ( 1) 00:14:14.638 2.157 - 2.170: 99.3677% ( 1) 00:14:14.638 2.184 - 2.198: 99.3739% ( 1) 00:14:14.638 2.198 - 2.212: 99.3800% ( 1) 00:14:14.638 2.226 - 2.240: 99.3862% ( 1) 00:14:14.638 2.351 - 2.365: 99.3923% ( 1) 00:14:14.638 3.701 - 3.729: 99.3984% ( 1) 00:14:14.638 3.812 - 3.840: 99.4107% ( 2) 00:14:14.638 3.923 - 3.951: 99.4169% ( 1) 00:14:14.638 4.536 
- 4.563: 99.4230% ( 1) 00:14:14.638 4.703 - 4.730: 99.4291% ( 1) 00:14:14.638 4.897 - 4.925: 99.4353% ( 1) 00:14:14.638 5.092 - 5.120: 99.4414% ( 1) 00:14:14.638 5.148 - 5.176: 99.4475% ( 1) 00:14:14.638 5.203 - 5.231: 99.4537% ( 1) 00:14:14.638 5.370 - 5.398: 99.4598% ( 1) 00:14:14.638 5.454 - 5.482: 99.4660% ( 1) 00:14:14.638 5.732 - 5.760: 99.4721% ( 1) 00:14:14.638 6.122 - 6.150: 99.4782% ( 1) 00:14:14.638 6.372 - 6.400: 99.4844% ( 1) 00:14:14.638 6.734 - 6.762: 99.4905% ( 1) 00:14:14.638 6.929 - 6.957: 99.4967% ( 1) 00:14:14.638 7.290 - 7.346: 99.5028% ( 1) 00:14:14.638 7.346 - 7.402: 99.5089% ( 1) 00:14:14.638 7.402 - 7.457: 99.5151% ( 1) 00:14:14.638 9.071 - 9.127: 99.5212% ( 1) 00:14:14.638 40.737 - 40.960: 99.5273% ( 1) 00:14:14.638 3989.148 - 4017.642: 99.9939% ( 76) 00:14:14.638 4017.642 - 4046.136: 100.0000% ( 1) 00:14:14.638 00:14:14.638 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:14.638 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:14.638 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:14.638 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:14.638 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:14.638 [ 00:14:14.638 { 00:14:14.638 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:14.638 "subtype": "Discovery", 00:14:14.638 "listen_addresses": [], 00:14:14.638 "allow_any_host": true, 00:14:14.638 "hosts": [] 00:14:14.638 }, 00:14:14.638 { 00:14:14.638 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:14.638 "subtype": "NVMe", 00:14:14.638 
"listen_addresses": [ 00:14:14.638 { 00:14:14.638 "trtype": "VFIOUSER", 00:14:14.638 "adrfam": "IPv4", 00:14:14.638 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:14.638 "trsvcid": "0" 00:14:14.638 } 00:14:14.638 ], 00:14:14.638 "allow_any_host": true, 00:14:14.638 "hosts": [], 00:14:14.638 "serial_number": "SPDK1", 00:14:14.638 "model_number": "SPDK bdev Controller", 00:14:14.638 "max_namespaces": 32, 00:14:14.638 "min_cntlid": 1, 00:14:14.638 "max_cntlid": 65519, 00:14:14.638 "namespaces": [ 00:14:14.638 { 00:14:14.638 "nsid": 1, 00:14:14.638 "bdev_name": "Malloc1", 00:14:14.638 "name": "Malloc1", 00:14:14.638 "nguid": "7E201E657B854D528242244AD50A1215", 00:14:14.638 "uuid": "7e201e65-7b85-4d52-8242-244ad50a1215" 00:14:14.638 }, 00:14:14.638 { 00:14:14.638 "nsid": 2, 00:14:14.638 "bdev_name": "Malloc3", 00:14:14.638 "name": "Malloc3", 00:14:14.638 "nguid": "DC8A827839C54F6A9585EF1B6D04A9D9", 00:14:14.638 "uuid": "dc8a8278-39c5-4f6a-9585-ef1b6d04a9d9" 00:14:14.638 } 00:14:14.638 ] 00:14:14.638 }, 00:14:14.638 { 00:14:14.638 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:14.638 "subtype": "NVMe", 00:14:14.638 "listen_addresses": [ 00:14:14.638 { 00:14:14.638 "trtype": "VFIOUSER", 00:14:14.638 "adrfam": "IPv4", 00:14:14.638 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:14.638 "trsvcid": "0" 00:14:14.638 } 00:14:14.638 ], 00:14:14.638 "allow_any_host": true, 00:14:14.638 "hosts": [], 00:14:14.638 "serial_number": "SPDK2", 00:14:14.638 "model_number": "SPDK bdev Controller", 00:14:14.638 "max_namespaces": 32, 00:14:14.638 "min_cntlid": 1, 00:14:14.638 "max_cntlid": 65519, 00:14:14.638 "namespaces": [ 00:14:14.638 { 00:14:14.638 "nsid": 1, 00:14:14.638 "bdev_name": "Malloc2", 00:14:14.638 "name": "Malloc2", 00:14:14.638 "nguid": "2C31DA8CD5BD419D93FF32405C2B08AE", 00:14:14.638 "uuid": "2c31da8c-d5bd-419d-93ff-32405c2b08ae" 00:14:14.638 } 00:14:14.638 ] 00:14:14.638 } 00:14:14.638 ] 00:14:14.638 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:14.638 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2131059 00:14:14.638 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:14.638 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:14.638 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:14.638 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:14.639 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:14.639 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:14.639 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:14.639 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:14.898 [2024-11-20 15:23:18.596699] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:14.898 Malloc4 00:14:14.898 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:15.157 [2024-11-20 15:23:18.833533] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:15.157 
15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:15.157 Asynchronous Event Request test 00:14:15.157 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:15.157 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:15.157 Registering asynchronous event callbacks... 00:14:15.157 Starting namespace attribute notice tests for all controllers... 00:14:15.157 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:15.157 aer_cb - Changed Namespace 00:14:15.157 Cleaning up... 00:14:15.157 [ 00:14:15.157 { 00:14:15.157 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:15.157 "subtype": "Discovery", 00:14:15.157 "listen_addresses": [], 00:14:15.157 "allow_any_host": true, 00:14:15.157 "hosts": [] 00:14:15.157 }, 00:14:15.157 { 00:14:15.157 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:15.157 "subtype": "NVMe", 00:14:15.157 "listen_addresses": [ 00:14:15.157 { 00:14:15.157 "trtype": "VFIOUSER", 00:14:15.157 "adrfam": "IPv4", 00:14:15.157 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:15.157 "trsvcid": "0" 00:14:15.157 } 00:14:15.157 ], 00:14:15.157 "allow_any_host": true, 00:14:15.157 "hosts": [], 00:14:15.157 "serial_number": "SPDK1", 00:14:15.157 "model_number": "SPDK bdev Controller", 00:14:15.157 "max_namespaces": 32, 00:14:15.157 "min_cntlid": 1, 00:14:15.157 "max_cntlid": 65519, 00:14:15.157 "namespaces": [ 00:14:15.157 { 00:14:15.157 "nsid": 1, 00:14:15.157 "bdev_name": "Malloc1", 00:14:15.157 "name": "Malloc1", 00:14:15.157 "nguid": "7E201E657B854D528242244AD50A1215", 00:14:15.157 "uuid": "7e201e65-7b85-4d52-8242-244ad50a1215" 00:14:15.157 }, 00:14:15.157 { 00:14:15.157 "nsid": 2, 00:14:15.157 "bdev_name": "Malloc3", 00:14:15.157 "name": "Malloc3", 00:14:15.157 "nguid": "DC8A827839C54F6A9585EF1B6D04A9D9", 00:14:15.157 "uuid": "dc8a8278-39c5-4f6a-9585-ef1b6d04a9d9" 
00:14:15.157 } 00:14:15.157 ] 00:14:15.157 }, 00:14:15.157 { 00:14:15.157 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:15.157 "subtype": "NVMe", 00:14:15.157 "listen_addresses": [ 00:14:15.157 { 00:14:15.157 "trtype": "VFIOUSER", 00:14:15.157 "adrfam": "IPv4", 00:14:15.157 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:15.157 "trsvcid": "0" 00:14:15.157 } 00:14:15.157 ], 00:14:15.157 "allow_any_host": true, 00:14:15.157 "hosts": [], 00:14:15.157 "serial_number": "SPDK2", 00:14:15.157 "model_number": "SPDK bdev Controller", 00:14:15.157 "max_namespaces": 32, 00:14:15.157 "min_cntlid": 1, 00:14:15.157 "max_cntlid": 65519, 00:14:15.157 "namespaces": [ 00:14:15.157 { 00:14:15.157 "nsid": 1, 00:14:15.157 "bdev_name": "Malloc2", 00:14:15.157 "name": "Malloc2", 00:14:15.157 "nguid": "2C31DA8CD5BD419D93FF32405C2B08AE", 00:14:15.157 "uuid": "2c31da8c-d5bd-419d-93ff-32405c2b08ae" 00:14:15.157 }, 00:14:15.157 { 00:14:15.157 "nsid": 2, 00:14:15.157 "bdev_name": "Malloc4", 00:14:15.157 "name": "Malloc4", 00:14:15.157 "nguid": "B2580BE519CB4ACC9CDC615469FDF7B8", 00:14:15.157 "uuid": "b2580be5-19cb-4acc-9cdc-615469fdf7b8" 00:14:15.157 } 00:14:15.157 ] 00:14:15.157 } 00:14:15.157 ] 00:14:15.157 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2131059 00:14:15.157 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:15.157 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2123208 00:14:15.157 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2123208 ']' 00:14:15.157 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2123208 00:14:15.418 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:15.418 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # 
'[' Linux = Linux ']' 00:14:15.418 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2123208 00:14:15.418 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:15.418 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:15.418 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2123208' 00:14:15.418 killing process with pid 2123208 00:14:15.418 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2123208 00:14:15.418 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2123208 00:14:15.680 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:15.680 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:15.680 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:15.680 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:15.680 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:15.680 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2131291 00:14:15.680 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2131291' 00:14:15.680 Process pid: 2131291 00:14:15.680 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:15.680 
15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:15.680 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2131291 00:14:15.680 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2131291 ']' 00:14:15.680 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.680 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.680 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.680 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.680 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:15.680 [2024-11-20 15:23:19.406119] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:15.680 [2024-11-20 15:23:19.407009] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:14:15.680 [2024-11-20 15:23:19.407050] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.680 [2024-11-20 15:23:19.481960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:15.680 [2024-11-20 15:23:19.519342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:15.680 [2024-11-20 15:23:19.519383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.680 [2024-11-20 15:23:19.519390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.680 [2024-11-20 15:23:19.519396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.680 [2024-11-20 15:23:19.519401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:15.680 [2024-11-20 15:23:19.521027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.680 [2024-11-20 15:23:19.521139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.680 [2024-11-20 15:23:19.521248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.680 [2024-11-20 15:23:19.521249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:15.939 [2024-11-20 15:23:19.589563] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:15.939 [2024-11-20 15:23:19.589916] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:15.939 [2024-11-20 15:23:19.590412] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:15.939 [2024-11-20 15:23:19.590642] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:15.939 [2024-11-20 15:23:19.590680] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:14:15.939 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.939 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:15.939 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:16.878 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:17.137 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:17.137 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:17.137 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:17.137 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:17.137 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:17.137 Malloc1 00:14:17.397 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:17.397 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:17.656 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:17.914 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:17.914 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:17.914 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:18.173 Malloc2 00:14:18.173 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:18.433 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:18.433 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:18.692 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:18.692 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2131291 00:14:18.692 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2131291 ']' 00:14:18.692 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2131291 00:14:18.692 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:18.692 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:18.692 15:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2131291 00:14:18.692 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:18.692 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:18.692 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2131291' 00:14:18.692 killing process with pid 2131291 00:14:18.692 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2131291 00:14:18.692 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2131291 00:14:18.951 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:18.951 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:18.951 00:14:18.951 real 0m51.663s 00:14:18.951 user 3m20.002s 00:14:18.951 sys 0m3.257s 00:14:18.951 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.951 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:18.951 ************************************ 00:14:18.951 END TEST nvmf_vfio_user 00:14:18.951 ************************************ 00:14:18.951 15:23:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:18.951 15:23:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:18.951 15:23:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.951 15:23:22 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:18.951 ************************************ 00:14:18.951 START TEST nvmf_vfio_user_nvme_compliance 00:14:18.951 ************************************ 00:14:18.951 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:19.212 * Looking for test storage... 00:14:19.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:19.212 15:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:19.212 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:19.212 15:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:19.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.212 --rc genhtml_branch_coverage=1 00:14:19.212 --rc genhtml_function_coverage=1 00:14:19.212 --rc genhtml_legend=1 00:14:19.212 --rc geninfo_all_blocks=1 00:14:19.212 --rc geninfo_unexecuted_blocks=1 00:14:19.212 00:14:19.212 ' 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:19.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.212 --rc genhtml_branch_coverage=1 00:14:19.212 --rc genhtml_function_coverage=1 00:14:19.212 --rc genhtml_legend=1 00:14:19.212 --rc geninfo_all_blocks=1 00:14:19.212 --rc geninfo_unexecuted_blocks=1 00:14:19.212 00:14:19.212 ' 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:19.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.212 --rc genhtml_branch_coverage=1 00:14:19.212 --rc genhtml_function_coverage=1 00:14:19.212 --rc 
genhtml_legend=1 00:14:19.212 --rc geninfo_all_blocks=1 00:14:19.212 --rc geninfo_unexecuted_blocks=1 00:14:19.212 00:14:19.212 ' 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:19.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.212 --rc genhtml_branch_coverage=1 00:14:19.212 --rc genhtml_function_coverage=1 00:14:19.212 --rc genhtml_legend=1 00:14:19.212 --rc geninfo_all_blocks=1 00:14:19.212 --rc geninfo_unexecuted_blocks=1 00:14:19.212 00:14:19.212 ' 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.212 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.213 15:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:19.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:19.213 15:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2131841 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2131841' 00:14:19.213 Process pid: 2131841 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2131841 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2131841 ']' 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:19.213 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:19.213 [2024-11-20 15:23:23.089602] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:14:19.213 [2024-11-20 15:23:23.089651] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.472 [2024-11-20 15:23:23.165276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:19.472 [2024-11-20 15:23:23.205098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.472 [2024-11-20 15:23:23.205137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.473 [2024-11-20 15:23:23.205143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.473 [2024-11-20 15:23:23.205149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.473 [2024-11-20 15:23:23.205154] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:19.473 [2024-11-20 15:23:23.206588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.473 [2024-11-20 15:23:23.206694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.473 [2024-11-20 15:23:23.206695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.473 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.473 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:19.473 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:20.410 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:20.410 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:20.411 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:20.411 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.411 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:20.670 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.670 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:20.670 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:20.670 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.670 15:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:20.670 malloc0 00:14:20.670 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.670 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:20.670 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.670 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:20.670 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.670 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:20.670 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.670 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:20.670 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.670 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:20.670 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.670 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:20.670 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:20.670 15:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:20.670 00:14:20.670 00:14:20.670 CUnit - A unit testing framework for C - Version 2.1-3 00:14:20.670 http://cunit.sourceforge.net/ 00:14:20.670 00:14:20.670 00:14:20.670 Suite: nvme_compliance 00:14:20.670 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 15:23:24.549410] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:20.670 [2024-11-20 15:23:24.550764] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:20.670 [2024-11-20 15:23:24.550781] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:20.670 [2024-11-20 15:23:24.550787] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:20.670 [2024-11-20 15:23:24.552435] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:20.933 passed 00:14:20.933 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 15:23:24.631005] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:20.933 [2024-11-20 15:23:24.634023] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:20.933 passed 00:14:20.933 Test: admin_identify_ns ...[2024-11-20 15:23:24.713864] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:20.933 [2024-11-20 15:23:24.774958] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:20.933 [2024-11-20 15:23:24.782957] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:20.933 [2024-11-20 15:23:24.804061] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:20.933 passed 00:14:21.191 Test: admin_get_features_mandatory_features ...[2024-11-20 15:23:24.878228] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:21.191 [2024-11-20 15:23:24.882255] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:21.191 passed 00:14:21.191 Test: admin_get_features_optional_features ...[2024-11-20 15:23:24.958778] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:21.191 [2024-11-20 15:23:24.963803] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:21.191 passed 00:14:21.191 Test: admin_set_features_number_of_queues ...[2024-11-20 15:23:25.039674] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:21.450 [2024-11-20 15:23:25.148034] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:21.450 passed 00:14:21.450 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 15:23:25.221966] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:21.450 [2024-11-20 15:23:25.224995] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:21.450 passed 00:14:21.450 Test: admin_get_log_page_with_lpo ...[2024-11-20 15:23:25.301755] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:21.708 [2024-11-20 15:23:25.369966] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:21.708 [2024-11-20 15:23:25.383013] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:21.708 passed 00:14:21.708 Test: fabric_property_get ...[2024-11-20 15:23:25.459878] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:21.708 [2024-11-20 15:23:25.461134] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:21.708 [2024-11-20 15:23:25.462901] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:21.708 passed 00:14:21.708 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 15:23:25.541412] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:21.708 [2024-11-20 15:23:25.542647] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:21.708 [2024-11-20 15:23:25.544428] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:21.708 passed 00:14:21.968 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 15:23:25.622255] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:21.968 [2024-11-20 15:23:25.705953] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:21.968 [2024-11-20 15:23:25.721953] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:21.968 [2024-11-20 15:23:25.727034] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:21.968 passed 00:14:21.968 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 15:23:25.802214] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:21.968 [2024-11-20 15:23:25.803455] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:21.968 [2024-11-20 15:23:25.805236] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:21.968 passed 00:14:22.227 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 15:23:25.881029] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:22.227 [2024-11-20 15:23:25.957968] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:22.227 [2024-11-20 
15:23:25.981967] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:22.227 [2024-11-20 15:23:25.987031] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:22.227 passed 00:14:22.227 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 15:23:26.063768] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:22.227 [2024-11-20 15:23:26.065016] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:22.227 [2024-11-20 15:23:26.065042] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:22.227 [2024-11-20 15:23:26.066794] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:22.227 passed 00:14:22.486 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 15:23:26.144693] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:22.486 [2024-11-20 15:23:26.236957] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:22.486 [2024-11-20 15:23:26.244959] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:22.486 [2024-11-20 15:23:26.252965] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:22.486 [2024-11-20 15:23:26.260967] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:22.486 [2024-11-20 15:23:26.290032] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:22.486 passed 00:14:22.486 Test: admin_create_io_sq_verify_pc ...[2024-11-20 15:23:26.363907] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:22.486 [2024-11-20 15:23:26.378963] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:22.745 [2024-11-20 15:23:26.396090] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:22.745 passed 00:14:22.745 Test: admin_create_io_qp_max_qps ...[2024-11-20 15:23:26.474626] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:23.682 [2024-11-20 15:23:27.571959] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:24.250 [2024-11-20 15:23:27.951908] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:24.250 passed 00:14:24.250 Test: admin_create_io_sq_shared_cq ...[2024-11-20 15:23:28.027885] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:24.509 [2024-11-20 15:23:28.160954] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:24.509 [2024-11-20 15:23:28.198005] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:24.509 passed 00:14:24.509 00:14:24.509 Run Summary: Type Total Ran Passed Failed Inactive 00:14:24.509 suites 1 1 n/a 0 0 00:14:24.509 tests 18 18 18 0 0 00:14:24.509 asserts 360 360 360 0 n/a 00:14:24.509 00:14:24.509 Elapsed time = 1.499 seconds 00:14:24.509 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2131841 00:14:24.509 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2131841 ']' 00:14:24.509 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2131841 00:14:24.509 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:24.509 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:24.509 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2131841 00:14:24.509 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:24.509 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:24.509 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2131841' 00:14:24.509 killing process with pid 2131841 00:14:24.509 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2131841 00:14:24.509 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2131841 00:14:24.769 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:24.769 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:24.769 00:14:24.769 real 0m5.647s 00:14:24.769 user 0m15.762s 00:14:24.769 sys 0m0.533s 00:14:24.769 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.769 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:24.769 ************************************ 00:14:24.769 END TEST nvmf_vfio_user_nvme_compliance 00:14:24.769 ************************************ 00:14:24.769 15:23:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:24.769 15:23:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:24.769 15:23:28 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.769 15:23:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:24.769 ************************************ 00:14:24.769 START TEST nvmf_vfio_user_fuzz 00:14:24.769 ************************************ 00:14:24.769 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:24.769 * Looking for test storage... 00:14:24.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.769 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:24.769 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:14:24.769 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:25.030 15:23:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:25.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.030 --rc genhtml_branch_coverage=1 00:14:25.030 --rc genhtml_function_coverage=1 00:14:25.030 --rc genhtml_legend=1 00:14:25.030 --rc geninfo_all_blocks=1 00:14:25.030 --rc geninfo_unexecuted_blocks=1 00:14:25.030 00:14:25.030 ' 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:25.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.030 --rc genhtml_branch_coverage=1 00:14:25.030 --rc genhtml_function_coverage=1 00:14:25.030 --rc genhtml_legend=1 00:14:25.030 --rc geninfo_all_blocks=1 00:14:25.030 --rc geninfo_unexecuted_blocks=1 00:14:25.030 00:14:25.030 ' 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:25.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.030 --rc genhtml_branch_coverage=1 00:14:25.030 --rc genhtml_function_coverage=1 00:14:25.030 --rc genhtml_legend=1 00:14:25.030 --rc geninfo_all_blocks=1 00:14:25.030 --rc geninfo_unexecuted_blocks=1 00:14:25.030 00:14:25.030 ' 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:25.030 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:25.030 --rc genhtml_branch_coverage=1 00:14:25.030 --rc genhtml_function_coverage=1 00:14:25.030 --rc genhtml_legend=1 00:14:25.030 --rc geninfo_all_blocks=1 00:14:25.030 --rc geninfo_unexecuted_blocks=1 00:14:25.030 00:14:25.030 ' 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.030 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.031 15:23:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:25.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2132832 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2132832' 00:14:25.031 Process pid: 2132832 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2132832 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2132832 ']' 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:25.031 15:23:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:25.031 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:25.290 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:25.290 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:25.290 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:26.321 malloc0 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:26.321 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:58.394 Fuzzing completed. Shutting down the fuzz application 00:14:58.394 00:14:58.394 Dumping successful admin opcodes: 00:14:58.394 8, 9, 10, 24, 00:14:58.394 Dumping successful io opcodes: 00:14:58.394 0, 00:14:58.394 NS: 0x20000081ef00 I/O qp, Total commands completed: 1117799, total successful commands: 4399, random_seed: 1644799488 00:14:58.394 NS: 0x20000081ef00 admin qp, Total commands completed: 276107, total successful commands: 2230, random_seed: 4005755648 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2132832 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2132832 ']' 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2132832 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2132832 00:14:58.394 15:24:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2132832' 00:14:58.394 killing process with pid 2132832 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2132832 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2132832 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:58.394 00:14:58.394 real 0m32.223s 00:14:58.394 user 0m34.267s 00:14:58.394 sys 0m27.262s 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:58.394 ************************************ 00:14:58.394 END TEST nvmf_vfio_user_fuzz 00:14:58.394 ************************************ 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:58.394 ************************************ 00:14:58.394 START TEST nvmf_auth_target 00:14:58.394 ************************************ 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:58.394 * Looking for test storage... 00:14:58.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:58.394 15:24:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:58.394 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:58.394 15:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:58.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.394 --rc genhtml_branch_coverage=1 00:14:58.394 --rc genhtml_function_coverage=1 00:14:58.394 --rc genhtml_legend=1 00:14:58.394 --rc geninfo_all_blocks=1 00:14:58.394 --rc geninfo_unexecuted_blocks=1 00:14:58.394 00:14:58.394 ' 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:58.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.394 --rc genhtml_branch_coverage=1 00:14:58.394 --rc genhtml_function_coverage=1 00:14:58.394 --rc genhtml_legend=1 00:14:58.394 --rc geninfo_all_blocks=1 00:14:58.394 --rc geninfo_unexecuted_blocks=1 00:14:58.394 00:14:58.394 ' 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:58.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.394 --rc genhtml_branch_coverage=1 00:14:58.394 --rc genhtml_function_coverage=1 00:14:58.394 --rc genhtml_legend=1 00:14:58.394 --rc geninfo_all_blocks=1 00:14:58.394 --rc geninfo_unexecuted_blocks=1 00:14:58.394 00:14:58.394 ' 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:58.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.394 --rc genhtml_branch_coverage=1 00:14:58.394 --rc genhtml_function_coverage=1 00:14:58.394 --rc genhtml_legend=1 00:14:58.394 
--rc geninfo_all_blocks=1 00:14:58.394 --rc geninfo_unexecuted_blocks=1 00:14:58.394 00:14:58.394 ' 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.394 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.395 
15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:58.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:58.395 15:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:58.395 15:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:58.395 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:03.668 15:24:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:15:03.668 Found 0000:86:00.0 (0x8086 - 0x159b)
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:15:03.668 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:15:03.669 Found 0000:86:00.1 (0x8086 - 0x159b)
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:15:03.669 Found net devices under 0000:86:00.0: cvl_0_0
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:15:03.669 Found net devices under 0000:86:00.1: cvl_0_1
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:15:03.669 15:24:06
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:15:03.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:03.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms
00:15:03.669
00:15:03.669 --- 10.0.0.2 ping statistics ---
00:15:03.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:03.669 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:03.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:03.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms
00:15:03.669
00:15:03.669 --- 10.0.0.1 ping statistics ---
00:15:03.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:03.669 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:15:03.669 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2141460
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2141460
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2141460 ']'
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2141487
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target
-- nvmf/common.sh@754 -- # digest=null
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d9db9390e3acb3a7e17010f809e676224fa3eacc3230ffe1
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.yNk
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d9db9390e3acb3a7e17010f809e676224fa3eacc3230ffe1 0
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d9db9390e3acb3a7e17010f809e676224fa3eacc3230ffe1 0
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d9db9390e3acb3a7e17010f809e676224fa3eacc3230ffe1
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.yNk
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.yNk
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.yNk
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d79cf277b00607828a52d9f18a97ec6e6fb2f4ed075f8123091fe5047a8f6392
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.da9
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d79cf277b00607828a52d9f18a97ec6e6fb2f4ed075f8123091fe5047a8f6392 3
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d79cf277b00607828a52d9f18a97ec6e6fb2f4ed075f8123091fe5047a8f6392 3
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d79cf277b00607828a52d9f18a97ec6e6fb2f4ed075f8123091fe5047a8f6392
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.da9
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.da9
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.da9
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=44b45ef43476c5b4253682ab89820f1e
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ZWF
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 44b45ef43476c5b4253682ab89820f1e 1
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 44b45ef43476c5b4253682ab89820f1e 1
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=44b45ef43476c5b4253682ab89820f1e
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ZWF
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ZWF
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.ZWF
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=83d413ce91555cf8c1d2e98000096293940dca0b6da83d1c
00:15:03.669 15:24:07
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:15:03.669 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.zTo
00:15:03.670 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 83d413ce91555cf8c1d2e98000096293940dca0b6da83d1c 2
00:15:03.670 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 83d413ce91555cf8c1d2e98000096293940dca0b6da83d1c 2
00:15:03.670 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:15:03.670 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:15:03.670 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=83d413ce91555cf8c1d2e98000096293940dca0b6da83d1c
00:15:03.670 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:15:03.670 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:15:03.670 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.zTo
00:15:03.670 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.zTo
00:15:03.670 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.zTo
00:15:03.670 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48
00:15:03.670 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:15:03.670 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:03.670 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:15:03.670 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:15:03.670 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=43281f1ead3594891f46a86791c468812c60fdb856b8b917
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yDY
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 43281f1ead3594891f46a86791c468812c60fdb856b8b917 2
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 43281f1ead3594891f46a86791c468812c60fdb856b8b917 2
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=43281f1ead3594891f46a86791c468812c60fdb856b8b917
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yDY
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yDY
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.yDY
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d384ce9418fe138cc3c5f60026185a01
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.p1z
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d384ce9418fe138cc3c5f60026185a01 1
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d384ce9418fe138cc3c5f60026185a01 1
00:15:03.929 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d384ce9418fe138cc3c5f60026185a01
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.p1z
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.p1z
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.p1z
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2b46fc6b024bc394c9f8706275114fa1cbfc229a7f5087bab4a688dbc9f3d914
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.M3E
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2b46fc6b024bc394c9f8706275114fa1cbfc229a7f5087bab4a688dbc9f3d914 3
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- #
format_key DHHC-1 2b46fc6b024bc394c9f8706275114fa1cbfc229a7f5087bab4a688dbc9f3d914 3
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2b46fc6b024bc394c9f8706275114fa1cbfc229a7f5087bab4a688dbc9f3d914
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.M3E
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.M3E
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.M3E
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]=
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2141460
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2141460 ']'
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:03.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:03.930 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:04.189 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:04.189 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:15:04.189 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2141487 /var/tmp/host.sock
00:15:04.189 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2141487 ']'
00:15:04.189 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:15:04.189 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:04.189 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:15:04.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:15:04.189 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:04.189 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:04.448 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:04.448 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:15:04.448 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd
00:15:04.448 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.448 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:04.448 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.448 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:15:04.448 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yNk
00:15:04.448 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.448 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:04.448 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.448 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.yNk
00:15:04.448 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.yNk
00:15:04.707 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.da9 ]]
00:15:04.707 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.da9
00:15:04.707 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.707 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:04.707 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.707 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.da9
00:15:04.707 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.da9
00:15:04.966 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:15:04.966 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ZWF
00:15:04.966 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.966 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:04.966 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.966 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.ZWF
00:15:04.966 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.ZWF
00:15:04.966 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- #
[[ -n /tmp/spdk.key-sha384.zTo ]] 00:15:04.966 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zTo 00:15:04.966 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.966 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.966 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.966 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zTo 00:15:04.966 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zTo 00:15:05.225 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:05.225 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.yDY 00:15:05.225 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.225 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.225 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.225 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.yDY 00:15:05.225 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.yDY 00:15:05.483 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.p1z ]] 00:15:05.483 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.p1z 00:15:05.483 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.483 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.483 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.483 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.p1z 00:15:05.483 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.p1z 00:15:05.742 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:05.742 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.M3E 00:15:05.742 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.742 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.742 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.742 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.M3E 00:15:05.742 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.M3E 00:15:05.742 15:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:05.742 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:05.742 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:05.742 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.742 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:05.742 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:06.001 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:06.001 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.001 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:06.001 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:06.001 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:06.001 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.001 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.001 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.001 15:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.001 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.001 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.001 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.001 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.259 00:15:06.259 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.259 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.259 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.519 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.519 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.519 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.519 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:06.519 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.519 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.519 { 00:15:06.519 "cntlid": 1, 00:15:06.519 "qid": 0, 00:15:06.519 "state": "enabled", 00:15:06.519 "thread": "nvmf_tgt_poll_group_000", 00:15:06.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:06.519 "listen_address": { 00:15:06.519 "trtype": "TCP", 00:15:06.519 "adrfam": "IPv4", 00:15:06.519 "traddr": "10.0.0.2", 00:15:06.519 "trsvcid": "4420" 00:15:06.519 }, 00:15:06.519 "peer_address": { 00:15:06.519 "trtype": "TCP", 00:15:06.519 "adrfam": "IPv4", 00:15:06.519 "traddr": "10.0.0.1", 00:15:06.519 "trsvcid": "51502" 00:15:06.519 }, 00:15:06.519 "auth": { 00:15:06.519 "state": "completed", 00:15:06.519 "digest": "sha256", 00:15:06.519 "dhgroup": "null" 00:15:06.519 } 00:15:06.519 } 00:15:06.519 ]' 00:15:06.519 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.519 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.519 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.519 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:06.519 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.519 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.519 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.519 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.778 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:15:06.778 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:15:07.345 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.345 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:07.345 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.345 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.345 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.345 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.345 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:15:07.345 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:07.604 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:07.604 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.604 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:07.604 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:07.604 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:07.604 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.604 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.604 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.604 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.604 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.604 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.604 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.604 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.862 00:15:07.862 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.862 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.862 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.121 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.121 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.121 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.121 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.121 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.122 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.122 { 00:15:08.122 "cntlid": 3, 00:15:08.122 "qid": 0, 00:15:08.122 "state": "enabled", 00:15:08.122 "thread": "nvmf_tgt_poll_group_000", 00:15:08.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:08.122 "listen_address": { 00:15:08.122 "trtype": "TCP", 00:15:08.122 "adrfam": "IPv4", 00:15:08.122 
"traddr": "10.0.0.2", 00:15:08.122 "trsvcid": "4420" 00:15:08.122 }, 00:15:08.122 "peer_address": { 00:15:08.122 "trtype": "TCP", 00:15:08.122 "adrfam": "IPv4", 00:15:08.122 "traddr": "10.0.0.1", 00:15:08.122 "trsvcid": "51532" 00:15:08.122 }, 00:15:08.122 "auth": { 00:15:08.122 "state": "completed", 00:15:08.122 "digest": "sha256", 00:15:08.122 "dhgroup": "null" 00:15:08.122 } 00:15:08.122 } 00:15:08.122 ]' 00:15:08.122 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.122 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.122 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.122 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:08.122 15:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.380 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.380 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.380 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.380 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:15:08.380 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:15:08.949 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.949 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:08.949 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.949 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.949 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.949 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.949 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:08.949 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:09.210 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:09.210 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.210 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:09.210 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:15:09.210 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:09.210 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.210 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.210 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.210 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.210 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.210 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.210 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.210 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.469 00:15:09.469 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.469 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.469 
15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.727 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.727 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.727 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.727 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.727 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.727 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.727 { 00:15:09.727 "cntlid": 5, 00:15:09.727 "qid": 0, 00:15:09.727 "state": "enabled", 00:15:09.727 "thread": "nvmf_tgt_poll_group_000", 00:15:09.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:09.727 "listen_address": { 00:15:09.727 "trtype": "TCP", 00:15:09.727 "adrfam": "IPv4", 00:15:09.727 "traddr": "10.0.0.2", 00:15:09.727 "trsvcid": "4420" 00:15:09.727 }, 00:15:09.727 "peer_address": { 00:15:09.727 "trtype": "TCP", 00:15:09.727 "adrfam": "IPv4", 00:15:09.727 "traddr": "10.0.0.1", 00:15:09.727 "trsvcid": "51564" 00:15:09.727 }, 00:15:09.727 "auth": { 00:15:09.727 "state": "completed", 00:15:09.727 "digest": "sha256", 00:15:09.727 "dhgroup": "null" 00:15:09.727 } 00:15:09.727 } 00:15:09.727 ]' 00:15:09.727 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.727 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.727 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:15:09.727 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:09.727 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.986 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.986 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.986 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.986 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:15:09.986 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:15:10.554 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.554 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:10.554 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.554 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.554 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.554 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.554 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:10.554 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:10.813 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:10.813 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.813 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:10.813 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:10.813 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:10.813 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.813 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:10.813 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.813 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:10.813 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.813 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:10.813 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:10.813 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:11.072 00:15:11.072 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.072 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.072 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.331 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.331 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.331 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.331 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.331 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.331 
15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.331 { 00:15:11.331 "cntlid": 7, 00:15:11.331 "qid": 0, 00:15:11.331 "state": "enabled", 00:15:11.331 "thread": "nvmf_tgt_poll_group_000", 00:15:11.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:11.331 "listen_address": { 00:15:11.331 "trtype": "TCP", 00:15:11.331 "adrfam": "IPv4", 00:15:11.331 "traddr": "10.0.0.2", 00:15:11.331 "trsvcid": "4420" 00:15:11.331 }, 00:15:11.331 "peer_address": { 00:15:11.331 "trtype": "TCP", 00:15:11.331 "adrfam": "IPv4", 00:15:11.331 "traddr": "10.0.0.1", 00:15:11.331 "trsvcid": "55258" 00:15:11.331 }, 00:15:11.331 "auth": { 00:15:11.331 "state": "completed", 00:15:11.331 "digest": "sha256", 00:15:11.331 "dhgroup": "null" 00:15:11.331 } 00:15:11.331 } 00:15:11.331 ]' 00:15:11.331 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.331 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.331 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.331 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:11.331 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.331 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.331 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.331 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.590 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:15:11.590 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:15:12.159 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.159 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:12.159 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.159 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.159 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.159 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:12.159 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.159 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:12.159 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:12.418 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:12.418 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.418 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:12.418 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:12.418 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:12.418 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.418 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.418 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.418 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.418 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.418 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.418 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.418 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.676 00:15:12.676 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.676 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.676 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.935 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.935 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.935 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.935 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.935 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.935 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.935 { 00:15:12.935 "cntlid": 9, 00:15:12.935 "qid": 0, 00:15:12.935 "state": "enabled", 00:15:12.935 "thread": "nvmf_tgt_poll_group_000", 00:15:12.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:12.935 "listen_address": { 00:15:12.935 "trtype": "TCP", 00:15:12.935 "adrfam": "IPv4", 00:15:12.935 "traddr": "10.0.0.2", 00:15:12.935 "trsvcid": "4420" 00:15:12.935 }, 00:15:12.935 "peer_address": { 00:15:12.935 "trtype": "TCP", 00:15:12.935 "adrfam": "IPv4", 00:15:12.935 "traddr": "10.0.0.1", 00:15:12.935 "trsvcid": "55290" 00:15:12.935 
}, 00:15:12.935 "auth": { 00:15:12.935 "state": "completed", 00:15:12.935 "digest": "sha256", 00:15:12.935 "dhgroup": "ffdhe2048" 00:15:12.935 } 00:15:12.935 } 00:15:12.935 ]' 00:15:12.935 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.935 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:12.935 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.935 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:12.935 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.935 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.935 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.935 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.193 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:15:13.193 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:15:13.760 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.760 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:13.760 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.760 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.760 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.760 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.760 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:13.760 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:14.018 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:14.018 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.018 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:14.018 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:14.018 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:14.018 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.018 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.018 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.018 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.018 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.018 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.018 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.018 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.276 00:15:14.276 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.276 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.276 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.535 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.535 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.535 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.535 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.535 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.535 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.535 { 00:15:14.535 "cntlid": 11, 00:15:14.535 "qid": 0, 00:15:14.535 "state": "enabled", 00:15:14.535 "thread": "nvmf_tgt_poll_group_000", 00:15:14.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:14.535 "listen_address": { 00:15:14.535 "trtype": "TCP", 00:15:14.535 "adrfam": "IPv4", 00:15:14.535 "traddr": "10.0.0.2", 00:15:14.535 "trsvcid": "4420" 00:15:14.535 }, 00:15:14.535 "peer_address": { 00:15:14.535 "trtype": "TCP", 00:15:14.535 "adrfam": "IPv4", 00:15:14.535 "traddr": "10.0.0.1", 00:15:14.535 "trsvcid": "55308" 00:15:14.535 }, 00:15:14.535 "auth": { 00:15:14.535 "state": "completed", 00:15:14.535 "digest": "sha256", 00:15:14.535 "dhgroup": "ffdhe2048" 00:15:14.535 } 00:15:14.535 } 00:15:14.535 ]' 00:15:14.535 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.535 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.535 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.535 15:24:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:14.535 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.535 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.535 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.535 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.793 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:15:14.793 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:15:15.360 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.360 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:15.360 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:15.360 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.360 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.360 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.360 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:15.360 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:15.618 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:15.618 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.618 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:15.618 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:15.618 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:15.618 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.618 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.618 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.618 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:15.618 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.618 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.618 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.618 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.877 00:15:15.877 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.877 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.877 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.135 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.135 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.135 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.135 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.135 15:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.135 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.135 { 00:15:16.135 "cntlid": 13, 00:15:16.135 "qid": 0, 00:15:16.135 "state": "enabled", 00:15:16.135 "thread": "nvmf_tgt_poll_group_000", 00:15:16.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:16.135 "listen_address": { 00:15:16.135 "trtype": "TCP", 00:15:16.135 "adrfam": "IPv4", 00:15:16.135 "traddr": "10.0.0.2", 00:15:16.135 "trsvcid": "4420" 00:15:16.135 }, 00:15:16.135 "peer_address": { 00:15:16.135 "trtype": "TCP", 00:15:16.135 "adrfam": "IPv4", 00:15:16.135 "traddr": "10.0.0.1", 00:15:16.135 "trsvcid": "55326" 00:15:16.135 }, 00:15:16.135 "auth": { 00:15:16.135 "state": "completed", 00:15:16.135 "digest": "sha256", 00:15:16.135 "dhgroup": "ffdhe2048" 00:15:16.135 } 00:15:16.135 } 00:15:16.135 ]' 00:15:16.135 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.135 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:16.135 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.135 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:16.135 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.135 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.135 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.135 15:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.394 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:15:16.394 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:15:16.962 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.962 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:16.962 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.962 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.962 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.962 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.962 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:16.962 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:17.220 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:17.220 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.220 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:17.220 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:17.220 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:17.220 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.220 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:17.220 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.220 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.220 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.220 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:17.220 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:17.220 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:17.479 00:15:17.479 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.479 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.479 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.737 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.737 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.737 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.737 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.737 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.737 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.737 { 00:15:17.737 "cntlid": 15, 00:15:17.737 "qid": 0, 00:15:17.737 "state": "enabled", 00:15:17.738 "thread": "nvmf_tgt_poll_group_000", 00:15:17.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:17.738 "listen_address": { 00:15:17.738 "trtype": "TCP", 00:15:17.738 "adrfam": "IPv4", 00:15:17.738 "traddr": "10.0.0.2", 00:15:17.738 "trsvcid": "4420" 00:15:17.738 }, 00:15:17.738 "peer_address": { 00:15:17.738 "trtype": "TCP", 00:15:17.738 "adrfam": "IPv4", 00:15:17.738 "traddr": "10.0.0.1", 
00:15:17.738 "trsvcid": "55362" 00:15:17.738 }, 00:15:17.738 "auth": { 00:15:17.738 "state": "completed", 00:15:17.738 "digest": "sha256", 00:15:17.738 "dhgroup": "ffdhe2048" 00:15:17.738 } 00:15:17.738 } 00:15:17.738 ]' 00:15:17.738 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.738 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.738 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.738 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:17.738 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.738 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.738 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.738 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.996 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:15:17.996 15:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:15:18.562 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.562 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:18.562 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.562 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.562 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.562 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:18.562 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.562 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:18.562 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:18.821 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:18.821 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.821 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:18.821 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:18.821 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:18.821 15:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.821 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.821 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.821 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.821 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.821 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.821 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.821 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.080 00:15:19.080 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.080 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.080 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.339 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.339 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.339 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.339 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.339 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.339 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.339 { 00:15:19.339 "cntlid": 17, 00:15:19.339 "qid": 0, 00:15:19.339 "state": "enabled", 00:15:19.339 "thread": "nvmf_tgt_poll_group_000", 00:15:19.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:19.339 "listen_address": { 00:15:19.339 "trtype": "TCP", 00:15:19.339 "adrfam": "IPv4", 00:15:19.339 "traddr": "10.0.0.2", 00:15:19.339 "trsvcid": "4420" 00:15:19.339 }, 00:15:19.339 "peer_address": { 00:15:19.339 "trtype": "TCP", 00:15:19.339 "adrfam": "IPv4", 00:15:19.339 "traddr": "10.0.0.1", 00:15:19.339 "trsvcid": "55396" 00:15:19.339 }, 00:15:19.339 "auth": { 00:15:19.339 "state": "completed", 00:15:19.339 "digest": "sha256", 00:15:19.339 "dhgroup": "ffdhe3072" 00:15:19.339 } 00:15:19.339 } 00:15:19.339 ]' 00:15:19.339 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.339 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.339 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.339 15:24:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:19.339 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.339 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.339 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.339 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.598 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:15:19.598 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:15:20.165 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.165 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:20.165 15:24:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.165 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.165 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.165 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.165 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:20.165 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:20.424 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:20.424 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.424 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:20.424 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:20.424 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:20.424 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.424 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.424 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.425 15:24:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.425 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.425 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.425 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.425 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.684 00:15:20.684 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.684 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.684 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.943 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.943 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.943 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.943 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:20.943 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.944 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.944 { 00:15:20.944 "cntlid": 19, 00:15:20.944 "qid": 0, 00:15:20.944 "state": "enabled", 00:15:20.944 "thread": "nvmf_tgt_poll_group_000", 00:15:20.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:20.944 "listen_address": { 00:15:20.944 "trtype": "TCP", 00:15:20.944 "adrfam": "IPv4", 00:15:20.944 "traddr": "10.0.0.2", 00:15:20.944 "trsvcid": "4420" 00:15:20.944 }, 00:15:20.944 "peer_address": { 00:15:20.944 "trtype": "TCP", 00:15:20.944 "adrfam": "IPv4", 00:15:20.944 "traddr": "10.0.0.1", 00:15:20.944 "trsvcid": "59346" 00:15:20.944 }, 00:15:20.944 "auth": { 00:15:20.944 "state": "completed", 00:15:20.944 "digest": "sha256", 00:15:20.944 "dhgroup": "ffdhe3072" 00:15:20.944 } 00:15:20.944 } 00:15:20.944 ]' 00:15:20.944 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.944 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.944 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.944 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:20.944 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.944 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.944 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.944 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.203 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:15:21.203 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:15:21.771 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.771 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:21.771 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.771 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.771 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.771 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.771 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:21.771 15:24:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:22.030 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:22.030 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.030 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:22.030 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:22.030 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:22.031 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.031 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.031 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.031 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.031 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.031 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.031 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.031 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.290 00:15:22.290 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.290 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.290 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.549 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.549 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.549 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.549 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.549 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.549 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.549 { 00:15:22.549 "cntlid": 21, 00:15:22.549 "qid": 0, 00:15:22.549 "state": "enabled", 00:15:22.549 "thread": "nvmf_tgt_poll_group_000", 00:15:22.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:22.549 "listen_address": { 00:15:22.549 "trtype": "TCP", 00:15:22.549 "adrfam": "IPv4", 00:15:22.549 "traddr": "10.0.0.2", 00:15:22.549 
"trsvcid": "4420" 00:15:22.549 }, 00:15:22.549 "peer_address": { 00:15:22.549 "trtype": "TCP", 00:15:22.549 "adrfam": "IPv4", 00:15:22.549 "traddr": "10.0.0.1", 00:15:22.549 "trsvcid": "59370" 00:15:22.549 }, 00:15:22.549 "auth": { 00:15:22.549 "state": "completed", 00:15:22.549 "digest": "sha256", 00:15:22.549 "dhgroup": "ffdhe3072" 00:15:22.549 } 00:15:22.549 } 00:15:22.549 ]' 00:15:22.549 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.549 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.549 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.549 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:22.549 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.549 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.549 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.549 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.808 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:15:22.808 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:15:23.376 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.376 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:23.376 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.376 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.376 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.376 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.376 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:23.377 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:23.636 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:23.636 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.636 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.636 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:23.636 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:23.636 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.636 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:23.636 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.636 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.636 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.636 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:23.636 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:23.636 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:23.895 00:15:23.895 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.895 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.895 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.895 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.895 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.895 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.895 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.895 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.895 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.895 { 00:15:23.895 "cntlid": 23, 00:15:23.895 "qid": 0, 00:15:23.895 "state": "enabled", 00:15:23.895 "thread": "nvmf_tgt_poll_group_000", 00:15:23.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:23.895 "listen_address": { 00:15:23.895 "trtype": "TCP", 00:15:23.895 "adrfam": "IPv4", 00:15:23.895 "traddr": "10.0.0.2", 00:15:23.895 "trsvcid": "4420" 00:15:23.895 }, 00:15:23.895 "peer_address": { 00:15:23.896 "trtype": "TCP", 00:15:23.896 "adrfam": "IPv4", 00:15:23.896 "traddr": "10.0.0.1", 00:15:23.896 "trsvcid": "59394" 00:15:23.896 }, 00:15:23.896 "auth": { 00:15:23.896 "state": "completed", 00:15:23.896 "digest": "sha256", 00:15:23.896 "dhgroup": "ffdhe3072" 00:15:23.896 } 00:15:23.896 } 00:15:23.896 ]' 00:15:23.896 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.155 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.155 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.155 15:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:24.155 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.155 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.155 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.155 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.414 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:15:24.414 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:15:24.983 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.983 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:24.983 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.983 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:24.983 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.983 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:24.983 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.983 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:24.983 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:25.242 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:25.242 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.242 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:25.242 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:25.242 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:25.242 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.242 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.242 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.242 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:25.242 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.242 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.242 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.242 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.501 00:15:25.501 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.501 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.501 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.501 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.502 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.502 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.502 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.762 15:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.762 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.762 { 00:15:25.762 "cntlid": 25, 00:15:25.762 "qid": 0, 00:15:25.762 "state": "enabled", 00:15:25.762 "thread": "nvmf_tgt_poll_group_000", 00:15:25.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:25.762 "listen_address": { 00:15:25.762 "trtype": "TCP", 00:15:25.762 "adrfam": "IPv4", 00:15:25.762 "traddr": "10.0.0.2", 00:15:25.762 "trsvcid": "4420" 00:15:25.762 }, 00:15:25.762 "peer_address": { 00:15:25.762 "trtype": "TCP", 00:15:25.762 "adrfam": "IPv4", 00:15:25.762 "traddr": "10.0.0.1", 00:15:25.762 "trsvcid": "59410" 00:15:25.762 }, 00:15:25.762 "auth": { 00:15:25.762 "state": "completed", 00:15:25.762 "digest": "sha256", 00:15:25.762 "dhgroup": "ffdhe4096" 00:15:25.762 } 00:15:25.762 } 00:15:25.762 ]' 00:15:25.762 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.762 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:25.762 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.762 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:25.762 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.762 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.762 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.762 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.021 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:15:26.021 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:15:26.589 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.589 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:26.589 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.589 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.589 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.589 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.589 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:26.589 15:24:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:26.849 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:26.849 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.849 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:26.849 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:26.849 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:26.849 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.849 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.849 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.849 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.849 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.849 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.849 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.849 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.108 00:15:27.108 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.108 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.108 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.108 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.109 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.109 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.109 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.368 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.368 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.368 { 00:15:27.368 "cntlid": 27, 00:15:27.368 "qid": 0, 00:15:27.368 "state": "enabled", 00:15:27.368 "thread": "nvmf_tgt_poll_group_000", 00:15:27.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:27.368 "listen_address": { 00:15:27.368 "trtype": "TCP", 00:15:27.368 "adrfam": "IPv4", 00:15:27.368 "traddr": "10.0.0.2", 00:15:27.368 
"trsvcid": "4420" 00:15:27.368 }, 00:15:27.368 "peer_address": { 00:15:27.368 "trtype": "TCP", 00:15:27.368 "adrfam": "IPv4", 00:15:27.368 "traddr": "10.0.0.1", 00:15:27.368 "trsvcid": "59442" 00:15:27.368 }, 00:15:27.368 "auth": { 00:15:27.368 "state": "completed", 00:15:27.368 "digest": "sha256", 00:15:27.368 "dhgroup": "ffdhe4096" 00:15:27.368 } 00:15:27.368 } 00:15:27.368 ]' 00:15:27.368 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.368 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:27.368 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.368 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:27.368 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.368 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.368 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.368 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.628 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:15:27.628 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:15:28.196 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.196 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:28.196 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.196 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.196 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.196 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.196 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:28.196 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:28.455 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:28.455 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.455 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.455 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:28.455 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:28.455 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.455 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.455 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.455 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.455 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.455 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.455 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.455 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.714 00:15:28.714 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.714 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:15:28.714 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.973 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.973 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.973 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.973 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.973 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.973 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.973 { 00:15:28.973 "cntlid": 29, 00:15:28.973 "qid": 0, 00:15:28.973 "state": "enabled", 00:15:28.973 "thread": "nvmf_tgt_poll_group_000", 00:15:28.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:28.973 "listen_address": { 00:15:28.973 "trtype": "TCP", 00:15:28.973 "adrfam": "IPv4", 00:15:28.973 "traddr": "10.0.0.2", 00:15:28.973 "trsvcid": "4420" 00:15:28.973 }, 00:15:28.973 "peer_address": { 00:15:28.973 "trtype": "TCP", 00:15:28.973 "adrfam": "IPv4", 00:15:28.973 "traddr": "10.0.0.1", 00:15:28.973 "trsvcid": "59472" 00:15:28.973 }, 00:15:28.973 "auth": { 00:15:28.973 "state": "completed", 00:15:28.973 "digest": "sha256", 00:15:28.973 "dhgroup": "ffdhe4096" 00:15:28.973 } 00:15:28.973 } 00:15:28.973 ]' 00:15:28.973 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.973 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.973 15:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.973 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:28.973 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.973 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.973 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.973 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.231 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:15:29.231 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:15:29.800 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.800 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:29.800 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.800 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.800 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.800 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.800 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:29.800 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:30.059 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:30.059 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.059 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:30.059 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:30.059 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:30.059 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.059 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:30.059 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.059 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.059 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.059 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:30.059 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:30.059 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:30.318 00:15:30.318 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.318 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.318 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.578 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.578 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.578 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.578 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:30.578 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.578 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.578 { 00:15:30.578 "cntlid": 31, 00:15:30.578 "qid": 0, 00:15:30.578 "state": "enabled", 00:15:30.578 "thread": "nvmf_tgt_poll_group_000", 00:15:30.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:30.578 "listen_address": { 00:15:30.578 "trtype": "TCP", 00:15:30.578 "adrfam": "IPv4", 00:15:30.578 "traddr": "10.0.0.2", 00:15:30.578 "trsvcid": "4420" 00:15:30.578 }, 00:15:30.578 "peer_address": { 00:15:30.578 "trtype": "TCP", 00:15:30.578 "adrfam": "IPv4", 00:15:30.578 "traddr": "10.0.0.1", 00:15:30.578 "trsvcid": "60242" 00:15:30.578 }, 00:15:30.578 "auth": { 00:15:30.578 "state": "completed", 00:15:30.578 "digest": "sha256", 00:15:30.578 "dhgroup": "ffdhe4096" 00:15:30.578 } 00:15:30.578 } 00:15:30.578 ]' 00:15:30.578 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.578 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.578 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.578 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:30.578 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.578 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.578 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.578 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.837 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:15:30.837 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:15:31.404 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.404 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:31.404 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.404 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.404 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.404 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:31.404 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.404 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:31.404 15:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:31.662 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:31.662 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.662 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:31.662 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:31.662 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:31.662 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.662 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.662 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.662 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.662 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.662 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.662 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.662 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.922 00:15:31.922 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.922 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.922 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.181 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.181 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.181 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.181 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.181 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.181 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.181 { 00:15:32.181 "cntlid": 33, 00:15:32.181 "qid": 0, 00:15:32.181 "state": "enabled", 00:15:32.181 "thread": "nvmf_tgt_poll_group_000", 00:15:32.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:32.181 "listen_address": { 00:15:32.181 "trtype": "TCP", 00:15:32.181 "adrfam": "IPv4", 00:15:32.181 "traddr": "10.0.0.2", 00:15:32.181 
"trsvcid": "4420" 00:15:32.181 }, 00:15:32.181 "peer_address": { 00:15:32.181 "trtype": "TCP", 00:15:32.181 "adrfam": "IPv4", 00:15:32.181 "traddr": "10.0.0.1", 00:15:32.181 "trsvcid": "60262" 00:15:32.181 }, 00:15:32.181 "auth": { 00:15:32.181 "state": "completed", 00:15:32.181 "digest": "sha256", 00:15:32.181 "dhgroup": "ffdhe6144" 00:15:32.181 } 00:15:32.181 } 00:15:32.181 ]' 00:15:32.181 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.181 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.181 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.181 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:32.181 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.440 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.440 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.440 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.440 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:15:32.440 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:15:33.008 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.008 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:33.008 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.008 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.008 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.008 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.008 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:33.008 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:33.267 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:33.267 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.267 15:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:33.267 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:33.267 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:33.267 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.267 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.267 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.267 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.267 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.267 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.267 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.267 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.836 00:15:33.836 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.836 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.836 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.836 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.836 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.836 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.836 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.836 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.836 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.836 { 00:15:33.836 "cntlid": 35, 00:15:33.836 "qid": 0, 00:15:33.836 "state": "enabled", 00:15:33.836 "thread": "nvmf_tgt_poll_group_000", 00:15:33.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:33.836 "listen_address": { 00:15:33.836 "trtype": "TCP", 00:15:33.836 "adrfam": "IPv4", 00:15:33.836 "traddr": "10.0.0.2", 00:15:33.836 "trsvcid": "4420" 00:15:33.836 }, 00:15:33.836 "peer_address": { 00:15:33.836 "trtype": "TCP", 00:15:33.836 "adrfam": "IPv4", 00:15:33.836 "traddr": "10.0.0.1", 00:15:33.836 "trsvcid": "60292" 00:15:33.836 }, 00:15:33.836 "auth": { 00:15:33.836 "state": "completed", 00:15:33.836 "digest": "sha256", 00:15:33.836 "dhgroup": "ffdhe6144" 00:15:33.836 } 00:15:33.836 } 00:15:33.836 ]' 00:15:33.836 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.836 15:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:33.836 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.095 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:34.095 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.095 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.095 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.095 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.355 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:15:34.355 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:15:34.924 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.924 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:34.924 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.924 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.924 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.924 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.924 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:34.924 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:34.924 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:35.183 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.183 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:35.183 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:35.183 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:35.183 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.183 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:35.183 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.183 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.183 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.183 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.183 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.183 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.442 00:15:35.442 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.442 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.442 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.701 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.701 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.701 15:24:39 
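
The log above repeats one verification cycle per key and DH group (set the host's DH-HMAC-CHAP options, register the host NQN with a key pair, attach a controller, verify, detach). A minimal sketch of that cycle, with the socket path, NQNs, and addresses taken from the log; the `hostrpc`/`rpc_cmd` wiring and `$SPDK_DIR` are assumptions standing in for the test framework's helpers:

```shell
# Values mirrored from the log output above.
RPC_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

hostrpc() {
    # Host-side RPCs in the log all go through rpc.py against host.sock;
    # SPDK_DIR is an assumed stand-in for the workspace checkout path.
    "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" "$@"
}

connect_authenticate() {
    # One cycle as seen in the log: digest/dhgroup/keyid vary per iteration.
    local digest=$1 dhgroup=$2 keyid=$3
    hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
        --dhchap-dhgroups "$dhgroup"
    # rpc_cmd is the target-side RPC helper in the test framework.
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # ... verify negotiated auth via nvmf_subsystem_get_qpairs, then:
    hostrpc bdev_nvme_detach_controller nvme0
}
```

The functions only sketch the sequencing; running them requires a live SPDK target and host application behind the two RPC sockets.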
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.701 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.701 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.701 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.701 { 00:15:35.701 "cntlid": 37, 00:15:35.701 "qid": 0, 00:15:35.701 "state": "enabled", 00:15:35.701 "thread": "nvmf_tgt_poll_group_000", 00:15:35.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:35.701 "listen_address": { 00:15:35.701 "trtype": "TCP", 00:15:35.701 "adrfam": "IPv4", 00:15:35.701 "traddr": "10.0.0.2", 00:15:35.701 "trsvcid": "4420" 00:15:35.701 }, 00:15:35.701 "peer_address": { 00:15:35.701 "trtype": "TCP", 00:15:35.701 "adrfam": "IPv4", 00:15:35.701 "traddr": "10.0.0.1", 00:15:35.701 "trsvcid": "60310" 00:15:35.701 }, 00:15:35.701 "auth": { 00:15:35.701 "state": "completed", 00:15:35.701 "digest": "sha256", 00:15:35.701 "dhgroup": "ffdhe6144" 00:15:35.701 } 00:15:35.701 } 00:15:35.701 ]' 00:15:35.701 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.701 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.701 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.701 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:35.701 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.701 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.701 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.701 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.959 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:15:35.959 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:15:36.526 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.526 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:36.526 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.526 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.526 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.526 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.526 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:36.526 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:36.786 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:36.786 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.786 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:36.786 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:36.786 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:36.786 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.786 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:36.786 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.786 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.786 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.786 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:36.786 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:36.786 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:37.076 00:15:37.076 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.076 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.076 15:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.410 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.410 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.410 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.410 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.410 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.410 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.410 { 00:15:37.410 "cntlid": 39, 00:15:37.410 "qid": 0, 00:15:37.410 "state": "enabled", 00:15:37.410 "thread": "nvmf_tgt_poll_group_000", 00:15:37.410 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:37.410 "listen_address": { 00:15:37.410 "trtype": "TCP", 00:15:37.410 "adrfam": 
"IPv4", 00:15:37.410 "traddr": "10.0.0.2", 00:15:37.410 "trsvcid": "4420" 00:15:37.410 }, 00:15:37.410 "peer_address": { 00:15:37.410 "trtype": "TCP", 00:15:37.410 "adrfam": "IPv4", 00:15:37.410 "traddr": "10.0.0.1", 00:15:37.410 "trsvcid": "60346" 00:15:37.410 }, 00:15:37.410 "auth": { 00:15:37.410 "state": "completed", 00:15:37.410 "digest": "sha256", 00:15:37.410 "dhgroup": "ffdhe6144" 00:15:37.410 } 00:15:37.410 } 00:15:37.410 ]' 00:15:37.410 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.410 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.410 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.410 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:37.410 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.410 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.410 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.410 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.689 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:15:37.689 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:15:38.257 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.257 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.257 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.257 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.257 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.257 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:38.257 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.257 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:38.257 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:38.516 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:38.516 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.516 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:38.516 
15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:38.516 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:38.516 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.516 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.516 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.516 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.516 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.516 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.516 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.516 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.084 00:15:39.084 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.084 15:24:42 
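
Each cycle then pulls the qpair list from `nvmf_subsystem_get_qpairs` and checks the negotiated `auth` fields with `jq -r '.[0].auth.digest'` and friends, as shown in the log. A dependency-free sketch of that check against a trimmed sample of the JSON above (sed stands in here for the log's jq calls):

```shell
# Sample qpair JSON as emitted by nvmf_subsystem_get_qpairs in the log,
# trimmed to the auth object the test inspects.
qpairs='[ { "auth": { "state": "completed", "digest": "sha256", "dhgroup": "ffdhe8192" } } ]'

# Extract one string field from the auth object; equivalent in spirit to
# the log's `jq -r '.[0].auth.<field>'` invocations.
auth_field() {
    printf '%s\n' "$qpairs" | sed -n "s/.*\"$1\": *\"\([^\"]*\)\".*/\1/p"
}

[ "$(auth_field digest)" = sha256 ] &&
    [ "$(auth_field dhgroup)" = ffdhe8192 ] &&
    [ "$(auth_field state)" = completed ] &&
    echo "auth verified"
```

The test only proceeds (to `bdev_nvme_detach_controller`) once all three fields match the digest and DH group configured for that iteration and the state reads `completed`.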
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.084 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.084 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.084 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.084 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.084 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.084 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.084 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.084 { 00:15:39.084 "cntlid": 41, 00:15:39.084 "qid": 0, 00:15:39.084 "state": "enabled", 00:15:39.084 "thread": "nvmf_tgt_poll_group_000", 00:15:39.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:39.084 "listen_address": { 00:15:39.084 "trtype": "TCP", 00:15:39.084 "adrfam": "IPv4", 00:15:39.084 "traddr": "10.0.0.2", 00:15:39.084 "trsvcid": "4420" 00:15:39.084 }, 00:15:39.084 "peer_address": { 00:15:39.084 "trtype": "TCP", 00:15:39.084 "adrfam": "IPv4", 00:15:39.084 "traddr": "10.0.0.1", 00:15:39.084 "trsvcid": "60378" 00:15:39.084 }, 00:15:39.084 "auth": { 00:15:39.084 "state": "completed", 00:15:39.084 "digest": "sha256", 00:15:39.084 "dhgroup": "ffdhe8192" 00:15:39.084 } 00:15:39.084 } 00:15:39.084 ]' 00:15:39.084 15:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.344 15:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:15:39.344 15:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.344 15:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:39.344 15:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.344 15:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.344 15:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.344 15:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.602 15:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:15:39.603 15:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:15:40.170 15:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.170 15:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.170 15:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.170 15:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.170 15:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.170 15:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.170 15:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:40.170 15:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:40.429 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:40.429 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.429 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:40.429 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:40.429 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:40.429 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.429 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:40.429 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.429 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.429 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.429 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.429 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.429 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.688 00:15:40.688 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.688 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.688 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.946 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.946 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.946 15:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.946 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.946 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.946 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.946 { 00:15:40.946 "cntlid": 43, 00:15:40.946 "qid": 0, 00:15:40.946 "state": "enabled", 00:15:40.946 "thread": "nvmf_tgt_poll_group_000", 00:15:40.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:40.946 "listen_address": { 00:15:40.946 "trtype": "TCP", 00:15:40.946 "adrfam": "IPv4", 00:15:40.946 "traddr": "10.0.0.2", 00:15:40.946 "trsvcid": "4420" 00:15:40.946 }, 00:15:40.946 "peer_address": { 00:15:40.947 "trtype": "TCP", 00:15:40.947 "adrfam": "IPv4", 00:15:40.947 "traddr": "10.0.0.1", 00:15:40.947 "trsvcid": "51630" 00:15:40.947 }, 00:15:40.947 "auth": { 00:15:40.947 "state": "completed", 00:15:40.947 "digest": "sha256", 00:15:40.947 "dhgroup": "ffdhe8192" 00:15:40.947 } 00:15:40.947 } 00:15:40.947 ]' 00:15:40.947 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.947 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.947 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.205 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:41.205 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.205 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.206 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.206 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.464 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:15:41.464 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:15:42.032 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.032 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:42.032 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.032 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.032 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.032 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.032 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:42.032 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:42.032 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:42.032 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.032 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:42.032 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:42.032 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:42.032 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.032 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.033 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.033 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.033 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.033 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.033 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.033 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.600 00:15:42.600 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.600 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.600 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.860 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.860 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.860 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.860 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.860 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.860 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.860 { 00:15:42.860 "cntlid": 45, 00:15:42.860 "qid": 0, 00:15:42.860 "state": "enabled", 00:15:42.860 "thread": "nvmf_tgt_poll_group_000", 00:15:42.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:42.860 
"listen_address": { 00:15:42.860 "trtype": "TCP", 00:15:42.860 "adrfam": "IPv4", 00:15:42.860 "traddr": "10.0.0.2", 00:15:42.860 "trsvcid": "4420" 00:15:42.860 }, 00:15:42.860 "peer_address": { 00:15:42.860 "trtype": "TCP", 00:15:42.860 "adrfam": "IPv4", 00:15:42.860 "traddr": "10.0.0.1", 00:15:42.860 "trsvcid": "51648" 00:15:42.860 }, 00:15:42.860 "auth": { 00:15:42.860 "state": "completed", 00:15:42.860 "digest": "sha256", 00:15:42.860 "dhgroup": "ffdhe8192" 00:15:42.860 } 00:15:42.860 } 00:15:42.860 ]' 00:15:42.860 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.860 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.860 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.860 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:42.860 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.860 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.860 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.860 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.119 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:15:43.119 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:15:43.689 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.690 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:43.690 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.690 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.690 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.690 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.690 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:43.690 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:43.949 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:43.949 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.949 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:43.949 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:43.949 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:43.949 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.949 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:43.949 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.949 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.949 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.949 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:43.949 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:43.949 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:44.517 00:15:44.517 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.517 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:44.517 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.776 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.776 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.776 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.776 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.776 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.776 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.776 { 00:15:44.776 "cntlid": 47, 00:15:44.776 "qid": 0, 00:15:44.776 "state": "enabled", 00:15:44.776 "thread": "nvmf_tgt_poll_group_000", 00:15:44.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:44.776 "listen_address": { 00:15:44.776 "trtype": "TCP", 00:15:44.776 "adrfam": "IPv4", 00:15:44.776 "traddr": "10.0.0.2", 00:15:44.776 "trsvcid": "4420" 00:15:44.776 }, 00:15:44.776 "peer_address": { 00:15:44.776 "trtype": "TCP", 00:15:44.777 "adrfam": "IPv4", 00:15:44.777 "traddr": "10.0.0.1", 00:15:44.777 "trsvcid": "51672" 00:15:44.777 }, 00:15:44.777 "auth": { 00:15:44.777 "state": "completed", 00:15:44.777 "digest": "sha256", 00:15:44.777 "dhgroup": "ffdhe8192" 00:15:44.777 } 00:15:44.777 } 00:15:44.777 ]' 00:15:44.777 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.777 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.777 15:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.777 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:44.777 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.777 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.777 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.777 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.035 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:15:45.035 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:15:45.603 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.603 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.603 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:45.603 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.603 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.603 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:45.603 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:45.603 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.603 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:45.603 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:45.861 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:45.861 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.861 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:45.861 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:45.861 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:45.861 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.861 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.861 
15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.861 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.861 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.861 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.861 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.861 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.120 00:15:46.120 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.120 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.120 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.379 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.379 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.379 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.379 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.379 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.379 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.379 { 00:15:46.379 "cntlid": 49, 00:15:46.379 "qid": 0, 00:15:46.379 "state": "enabled", 00:15:46.379 "thread": "nvmf_tgt_poll_group_000", 00:15:46.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:46.379 "listen_address": { 00:15:46.379 "trtype": "TCP", 00:15:46.379 "adrfam": "IPv4", 00:15:46.379 "traddr": "10.0.0.2", 00:15:46.379 "trsvcid": "4420" 00:15:46.379 }, 00:15:46.379 "peer_address": { 00:15:46.379 "trtype": "TCP", 00:15:46.379 "adrfam": "IPv4", 00:15:46.379 "traddr": "10.0.0.1", 00:15:46.379 "trsvcid": "51692" 00:15:46.379 }, 00:15:46.379 "auth": { 00:15:46.379 "state": "completed", 00:15:46.379 "digest": "sha384", 00:15:46.379 "dhgroup": "null" 00:15:46.379 } 00:15:46.379 } 00:15:46.379 ]' 00:15:46.379 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.379 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.379 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.379 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:46.379 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.379 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.379 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:15:46.379 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.637 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:15:46.637 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:15:47.204 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.204 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:47.204 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.204 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.204 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.204 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.204 15:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:47.204 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:47.463 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:47.463 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.463 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:47.463 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:47.463 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:47.463 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.463 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.463 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.463 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.463 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.463 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.463 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.463 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.721 00:15:47.721 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.721 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.721 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.721 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.721 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.721 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.721 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.980 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.980 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.980 { 00:15:47.980 "cntlid": 51, 00:15:47.980 "qid": 0, 00:15:47.980 "state": "enabled", 00:15:47.980 "thread": "nvmf_tgt_poll_group_000", 00:15:47.980 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:47.980 "listen_address": { 00:15:47.980 "trtype": "TCP", 00:15:47.980 "adrfam": "IPv4", 00:15:47.980 "traddr": "10.0.0.2", 00:15:47.980 "trsvcid": "4420" 00:15:47.980 }, 00:15:47.980 "peer_address": { 00:15:47.980 "trtype": "TCP", 00:15:47.980 "adrfam": "IPv4", 00:15:47.980 "traddr": "10.0.0.1", 00:15:47.980 "trsvcid": "51726" 00:15:47.980 }, 00:15:47.980 "auth": { 00:15:47.980 "state": "completed", 00:15:47.980 "digest": "sha384", 00:15:47.980 "dhgroup": "null" 00:15:47.980 } 00:15:47.980 } 00:15:47.980 ]' 00:15:47.980 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.980 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.980 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.980 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:47.980 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.980 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.980 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.980 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.239 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:15:48.239 15:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:15:48.807 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.807 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:48.807 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.807 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.807 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.807 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.807 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:48.807 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:49.066 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:49.066 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:15:49.066 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:49.066 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:49.066 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:49.066 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.066 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.066 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.066 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.066 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.066 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.066 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.066 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.323 00:15:49.323 15:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.323 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.323 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.323 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.323 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.323 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.323 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.323 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.323 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.323 { 00:15:49.323 "cntlid": 53, 00:15:49.323 "qid": 0, 00:15:49.323 "state": "enabled", 00:15:49.323 "thread": "nvmf_tgt_poll_group_000", 00:15:49.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:49.323 "listen_address": { 00:15:49.323 "trtype": "TCP", 00:15:49.323 "adrfam": "IPv4", 00:15:49.323 "traddr": "10.0.0.2", 00:15:49.323 "trsvcid": "4420" 00:15:49.323 }, 00:15:49.323 "peer_address": { 00:15:49.323 "trtype": "TCP", 00:15:49.323 "adrfam": "IPv4", 00:15:49.323 "traddr": "10.0.0.1", 00:15:49.323 "trsvcid": "51752" 00:15:49.323 }, 00:15:49.323 "auth": { 00:15:49.323 "state": "completed", 00:15:49.323 "digest": "sha384", 00:15:49.323 "dhgroup": "null" 00:15:49.323 } 00:15:49.323 } 00:15:49.323 ]' 00:15:49.323 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:49.580 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.580 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.580 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:49.580 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.580 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.580 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.580 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.837 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:15:49.838 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:15:50.403 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.403 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.403 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.403 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.403 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.403 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.403 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:50.403 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:50.661 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:50.661 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.661 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:50.661 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:50.661 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:50.661 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.661 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:50.661 
15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.661 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.661 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.661 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:50.661 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.661 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.918 00:15:50.918 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.918 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.918 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.918 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.918 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.918 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.918 15:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.918 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.919 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.919 { 00:15:50.919 "cntlid": 55, 00:15:50.919 "qid": 0, 00:15:50.919 "state": "enabled", 00:15:50.919 "thread": "nvmf_tgt_poll_group_000", 00:15:50.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:50.919 "listen_address": { 00:15:50.919 "trtype": "TCP", 00:15:50.919 "adrfam": "IPv4", 00:15:50.919 "traddr": "10.0.0.2", 00:15:50.919 "trsvcid": "4420" 00:15:50.919 }, 00:15:50.919 "peer_address": { 00:15:50.919 "trtype": "TCP", 00:15:50.919 "adrfam": "IPv4", 00:15:50.919 "traddr": "10.0.0.1", 00:15:50.919 "trsvcid": "50618" 00:15:50.919 }, 00:15:50.919 "auth": { 00:15:50.919 "state": "completed", 00:15:50.919 "digest": "sha384", 00:15:50.919 "dhgroup": "null" 00:15:50.919 } 00:15:50.919 } 00:15:50.919 ]' 00:15:50.919 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.176 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.176 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.176 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:51.176 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.176 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.176 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.176 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.434 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:15:51.434 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:15:52.001 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.001 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:52.001 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.001 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.001 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.001 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:52.001 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.001 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:52.001 15:24:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:52.001 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:52.001 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.001 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:52.001 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:52.001 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:52.001 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.001 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.001 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.001 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.258 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.258 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.259 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.259 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.259 00:15:52.259 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.259 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.259 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.515 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.515 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.515 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.515 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.515 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.515 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.515 { 00:15:52.515 "cntlid": 57, 00:15:52.515 "qid": 0, 00:15:52.515 "state": "enabled", 00:15:52.515 "thread": "nvmf_tgt_poll_group_000", 00:15:52.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:52.515 "listen_address": { 00:15:52.516 "trtype": "TCP", 00:15:52.516 "adrfam": "IPv4", 00:15:52.516 "traddr": "10.0.0.2", 00:15:52.516 
"trsvcid": "4420" 00:15:52.516 }, 00:15:52.516 "peer_address": { 00:15:52.516 "trtype": "TCP", 00:15:52.516 "adrfam": "IPv4", 00:15:52.516 "traddr": "10.0.0.1", 00:15:52.516 "trsvcid": "50638" 00:15:52.516 }, 00:15:52.516 "auth": { 00:15:52.516 "state": "completed", 00:15:52.516 "digest": "sha384", 00:15:52.516 "dhgroup": "ffdhe2048" 00:15:52.516 } 00:15:52.516 } 00:15:52.516 ]' 00:15:52.516 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.516 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.516 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.777 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:52.777 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.777 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.777 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.777 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.036 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:15:53.036 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:15:53.603 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.603 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.603 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.603 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.603 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.603 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.603 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:53.603 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:53.603 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:53.603 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.603 15:24:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.603 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:53.603 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:53.603 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.603 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.603 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.603 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.603 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.603 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.603 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.603 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.861 00:15:53.861 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.861 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.861 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.119 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.119 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.119 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.119 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.119 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.119 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.119 { 00:15:54.119 "cntlid": 59, 00:15:54.119 "qid": 0, 00:15:54.119 "state": "enabled", 00:15:54.119 "thread": "nvmf_tgt_poll_group_000", 00:15:54.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:54.119 "listen_address": { 00:15:54.119 "trtype": "TCP", 00:15:54.119 "adrfam": "IPv4", 00:15:54.119 "traddr": "10.0.0.2", 00:15:54.119 "trsvcid": "4420" 00:15:54.119 }, 00:15:54.119 "peer_address": { 00:15:54.119 "trtype": "TCP", 00:15:54.119 "adrfam": "IPv4", 00:15:54.119 "traddr": "10.0.0.1", 00:15:54.119 "trsvcid": "50662" 00:15:54.119 }, 00:15:54.119 "auth": { 00:15:54.119 "state": "completed", 00:15:54.119 "digest": "sha384", 00:15:54.119 "dhgroup": "ffdhe2048" 00:15:54.119 } 00:15:54.119 } 00:15:54.119 ]' 00:15:54.119 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.119 15:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.119 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.377 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:54.377 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.377 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.377 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.377 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.653 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:15:54.653 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:15:55.221 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.221 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:55.221 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.221 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.221 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.221 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.221 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:55.221 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:55.221 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:55.221 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.221 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:55.221 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:55.221 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:55.221 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.221 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:55.221 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.221 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.221 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.221 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.221 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.221 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.484 00:15:55.484 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.484 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.484 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.750 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.750 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.750 15:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.750 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.750 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.750 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.750 { 00:15:55.750 "cntlid": 61, 00:15:55.750 "qid": 0, 00:15:55.750 "state": "enabled", 00:15:55.750 "thread": "nvmf_tgt_poll_group_000", 00:15:55.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:55.750 "listen_address": { 00:15:55.750 "trtype": "TCP", 00:15:55.750 "adrfam": "IPv4", 00:15:55.750 "traddr": "10.0.0.2", 00:15:55.750 "trsvcid": "4420" 00:15:55.750 }, 00:15:55.750 "peer_address": { 00:15:55.750 "trtype": "TCP", 00:15:55.750 "adrfam": "IPv4", 00:15:55.750 "traddr": "10.0.0.1", 00:15:55.750 "trsvcid": "50706" 00:15:55.750 }, 00:15:55.750 "auth": { 00:15:55.750 "state": "completed", 00:15:55.750 "digest": "sha384", 00:15:55.750 "dhgroup": "ffdhe2048" 00:15:55.750 } 00:15:55.750 } 00:15:55.750 ]' 00:15:55.750 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.750 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.750 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.750 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:55.750 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.008 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.008 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.008 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.008 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:15:56.008 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:15:56.572 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.572 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:56.572 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.572 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.572 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.572 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.572 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:56.572 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:56.830 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:56.830 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.830 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:56.830 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:56.830 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:56.830 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.830 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:56.830 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.830 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.830 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.830 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:56.830 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.830 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:57.090 00:15:57.090 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.090 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.090 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.349 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.349 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.349 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.349 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.349 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.349 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.349 { 00:15:57.349 "cntlid": 63, 00:15:57.349 "qid": 0, 00:15:57.349 "state": "enabled", 00:15:57.349 "thread": "nvmf_tgt_poll_group_000", 00:15:57.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:57.349 "listen_address": { 00:15:57.349 "trtype": "TCP", 00:15:57.349 "adrfam": 
"IPv4", 00:15:57.349 "traddr": "10.0.0.2", 00:15:57.349 "trsvcid": "4420" 00:15:57.349 }, 00:15:57.349 "peer_address": { 00:15:57.349 "trtype": "TCP", 00:15:57.349 "adrfam": "IPv4", 00:15:57.349 "traddr": "10.0.0.1", 00:15:57.349 "trsvcid": "50742" 00:15:57.349 }, 00:15:57.349 "auth": { 00:15:57.349 "state": "completed", 00:15:57.349 "digest": "sha384", 00:15:57.349 "dhgroup": "ffdhe2048" 00:15:57.349 } 00:15:57.349 } 00:15:57.349 ]' 00:15:57.349 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.349 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.349 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.349 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.349 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.608 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.608 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.608 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.608 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:15:57.608 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:15:58.176 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.176 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:58.176 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.176 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.176 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.176 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.176 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.176 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:58.176 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:58.435 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:58.435 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.435 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:58.435 
15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:58.435 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:58.435 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.435 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.436 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.436 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.436 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.436 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.436 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.436 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.695 00:15:58.695 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.695 15:25:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.695 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.963 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.963 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.963 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.963 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.963 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.963 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.963 { 00:15:58.963 "cntlid": 65, 00:15:58.963 "qid": 0, 00:15:58.963 "state": "enabled", 00:15:58.963 "thread": "nvmf_tgt_poll_group_000", 00:15:58.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:58.963 "listen_address": { 00:15:58.963 "trtype": "TCP", 00:15:58.963 "adrfam": "IPv4", 00:15:58.963 "traddr": "10.0.0.2", 00:15:58.963 "trsvcid": "4420" 00:15:58.963 }, 00:15:58.963 "peer_address": { 00:15:58.963 "trtype": "TCP", 00:15:58.963 "adrfam": "IPv4", 00:15:58.963 "traddr": "10.0.0.1", 00:15:58.963 "trsvcid": "50776" 00:15:58.963 }, 00:15:58.963 "auth": { 00:15:58.963 "state": "completed", 00:15:58.963 "digest": "sha384", 00:15:58.963 "dhgroup": "ffdhe3072" 00:15:58.963 } 00:15:58.963 } 00:15:58.963 ]' 00:15:58.963 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.963 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:15:58.963 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.963 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:58.963 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.963 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.963 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.963 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.223 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:15:59.223 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:15:59.790 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.790 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.790 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.790 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.790 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.790 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.790 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:59.790 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:00.050 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:00.050 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.050 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.050 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:00.050 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:00.050 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.050 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:00.050 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.050 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.050 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.050 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.050 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.050 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.308 00:16:00.308 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.308 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.308 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.569 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.569 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.569 15:25:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.569 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.569 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.569 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.569 { 00:16:00.569 "cntlid": 67, 00:16:00.569 "qid": 0, 00:16:00.569 "state": "enabled", 00:16:00.569 "thread": "nvmf_tgt_poll_group_000", 00:16:00.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:00.569 "listen_address": { 00:16:00.569 "trtype": "TCP", 00:16:00.569 "adrfam": "IPv4", 00:16:00.569 "traddr": "10.0.0.2", 00:16:00.569 "trsvcid": "4420" 00:16:00.569 }, 00:16:00.569 "peer_address": { 00:16:00.569 "trtype": "TCP", 00:16:00.569 "adrfam": "IPv4", 00:16:00.569 "traddr": "10.0.0.1", 00:16:00.569 "trsvcid": "56898" 00:16:00.569 }, 00:16:00.569 "auth": { 00:16:00.569 "state": "completed", 00:16:00.569 "digest": "sha384", 00:16:00.569 "dhgroup": "ffdhe3072" 00:16:00.569 } 00:16:00.569 } 00:16:00.569 ]' 00:16:00.569 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.569 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.569 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.569 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:00.569 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.569 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.569 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.569 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.828 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:16:00.828 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:16:01.397 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.397 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.397 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.397 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.397 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.397 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.397 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:01.397 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:01.656 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:01.656 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.656 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:01.656 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:01.656 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:01.656 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.656 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.656 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.656 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.656 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.656 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.656 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.656 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.915 00:16:01.915 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.915 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.915 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.174 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.174 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.174 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.174 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.174 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.174 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.174 { 00:16:02.174 "cntlid": 69, 00:16:02.174 "qid": 0, 00:16:02.174 "state": "enabled", 00:16:02.174 "thread": "nvmf_tgt_poll_group_000", 00:16:02.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:02.174 
"listen_address": { 00:16:02.174 "trtype": "TCP", 00:16:02.174 "adrfam": "IPv4", 00:16:02.174 "traddr": "10.0.0.2", 00:16:02.174 "trsvcid": "4420" 00:16:02.174 }, 00:16:02.174 "peer_address": { 00:16:02.174 "trtype": "TCP", 00:16:02.174 "adrfam": "IPv4", 00:16:02.174 "traddr": "10.0.0.1", 00:16:02.174 "trsvcid": "56922" 00:16:02.174 }, 00:16:02.174 "auth": { 00:16:02.174 "state": "completed", 00:16:02.174 "digest": "sha384", 00:16:02.174 "dhgroup": "ffdhe3072" 00:16:02.174 } 00:16:02.174 } 00:16:02.174 ]' 00:16:02.174 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.174 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.174 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.174 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:02.174 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.174 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.174 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.174 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.433 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:16:02.433 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:16:03.001 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.001 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:03.001 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.001 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.001 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.001 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.001 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:03.001 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:03.261 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:03.261 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.261 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:03.261 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:03.261 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:03.261 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.261 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:03.261 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.261 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.261 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.261 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:03.261 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.261 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.520 00:16:03.520 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.520 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:03.520 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.780 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.780 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.780 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.780 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.780 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.780 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.780 { 00:16:03.780 "cntlid": 71, 00:16:03.780 "qid": 0, 00:16:03.780 "state": "enabled", 00:16:03.780 "thread": "nvmf_tgt_poll_group_000", 00:16:03.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:03.780 "listen_address": { 00:16:03.780 "trtype": "TCP", 00:16:03.780 "adrfam": "IPv4", 00:16:03.780 "traddr": "10.0.0.2", 00:16:03.780 "trsvcid": "4420" 00:16:03.780 }, 00:16:03.780 "peer_address": { 00:16:03.780 "trtype": "TCP", 00:16:03.780 "adrfam": "IPv4", 00:16:03.780 "traddr": "10.0.0.1", 00:16:03.780 "trsvcid": "56936" 00:16:03.780 }, 00:16:03.780 "auth": { 00:16:03.780 "state": "completed", 00:16:03.780 "digest": "sha384", 00:16:03.780 "dhgroup": "ffdhe3072" 00:16:03.780 } 00:16:03.780 } 00:16:03.780 ]' 00:16:03.780 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.780 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.780 15:25:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.780 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:03.780 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.780 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.780 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.780 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.039 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:16:04.039 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:16:04.607 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.607 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.607 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:04.607 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.607 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.607 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.607 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.607 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:04.607 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:04.866 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:04.867 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.867 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.867 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:04.867 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:04.867 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.867 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.867 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:04.867 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.867 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.867 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.867 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.867 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.125 00:16:05.125 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.125 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.125 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.385 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.385 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.385 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.385 15:25:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.385 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.385 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.385 { 00:16:05.385 "cntlid": 73, 00:16:05.385 "qid": 0, 00:16:05.385 "state": "enabled", 00:16:05.385 "thread": "nvmf_tgt_poll_group_000", 00:16:05.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:05.385 "listen_address": { 00:16:05.385 "trtype": "TCP", 00:16:05.385 "adrfam": "IPv4", 00:16:05.385 "traddr": "10.0.0.2", 00:16:05.385 "trsvcid": "4420" 00:16:05.385 }, 00:16:05.385 "peer_address": { 00:16:05.385 "trtype": "TCP", 00:16:05.385 "adrfam": "IPv4", 00:16:05.385 "traddr": "10.0.0.1", 00:16:05.385 "trsvcid": "56946" 00:16:05.385 }, 00:16:05.385 "auth": { 00:16:05.385 "state": "completed", 00:16:05.385 "digest": "sha384", 00:16:05.385 "dhgroup": "ffdhe4096" 00:16:05.385 } 00:16:05.385 } 00:16:05.385 ]' 00:16:05.385 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.385 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.385 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.385 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:05.385 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.385 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.385 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.385 15:25:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.644 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:16:05.644 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:16:06.211 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.211 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:06.211 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.211 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.211 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.211 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.211 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:06.211 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:06.471 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:06.471 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.471 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.471 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:06.471 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:06.471 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.471 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.471 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.471 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.471 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.471 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.471 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.471 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.730 00:16:06.730 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.730 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.730 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.990 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.990 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.990 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.990 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.991 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.991 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.991 { 00:16:06.991 "cntlid": 75, 00:16:06.991 "qid": 0, 00:16:06.991 "state": "enabled", 00:16:06.991 "thread": "nvmf_tgt_poll_group_000", 00:16:06.991 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:06.991 
"listen_address": { 00:16:06.991 "trtype": "TCP", 00:16:06.991 "adrfam": "IPv4", 00:16:06.991 "traddr": "10.0.0.2", 00:16:06.991 "trsvcid": "4420" 00:16:06.991 }, 00:16:06.991 "peer_address": { 00:16:06.991 "trtype": "TCP", 00:16:06.991 "adrfam": "IPv4", 00:16:06.991 "traddr": "10.0.0.1", 00:16:06.991 "trsvcid": "56954" 00:16:06.991 }, 00:16:06.991 "auth": { 00:16:06.991 "state": "completed", 00:16:06.991 "digest": "sha384", 00:16:06.991 "dhgroup": "ffdhe4096" 00:16:06.991 } 00:16:06.991 } 00:16:06.991 ]' 00:16:06.991 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.991 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.991 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.991 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:06.991 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.250 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.250 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.250 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.250 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:16:07.250 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:16:07.819 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.819 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:07.819 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.819 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.819 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.819 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.819 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:07.819 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:08.078 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:08.078 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.078 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:08.078 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:08.078 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:08.078 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.078 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.078 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.078 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.078 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.078 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.078 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.078 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.337 00:16:08.337 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:08.337 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.337 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.596 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.596 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.596 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.596 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.596 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.596 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.596 { 00:16:08.596 "cntlid": 77, 00:16:08.596 "qid": 0, 00:16:08.596 "state": "enabled", 00:16:08.596 "thread": "nvmf_tgt_poll_group_000", 00:16:08.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:08.596 "listen_address": { 00:16:08.596 "trtype": "TCP", 00:16:08.596 "adrfam": "IPv4", 00:16:08.596 "traddr": "10.0.0.2", 00:16:08.596 "trsvcid": "4420" 00:16:08.596 }, 00:16:08.596 "peer_address": { 00:16:08.596 "trtype": "TCP", 00:16:08.596 "adrfam": "IPv4", 00:16:08.596 "traddr": "10.0.0.1", 00:16:08.596 "trsvcid": "56974" 00:16:08.596 }, 00:16:08.596 "auth": { 00:16:08.596 "state": "completed", 00:16:08.596 "digest": "sha384", 00:16:08.596 "dhgroup": "ffdhe4096" 00:16:08.596 } 00:16:08.596 } 00:16:08.596 ]' 00:16:08.596 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.596 15:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.596 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.855 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:08.855 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.855 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.855 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.855 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.114 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:16:09.114 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:16:09.682 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.683 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:09.683 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.683 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.683 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.683 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.683 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:09.683 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:09.683 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:09.683 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.683 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:09.683 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:09.683 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:09.683 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.683 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:09.683 15:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.683 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.683 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.683 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:09.683 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:09.683 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:09.942 00:16:09.942 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.201 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.201 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.201 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.201 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.201 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.201 15:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.201 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.201 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.201 { 00:16:10.201 "cntlid": 79, 00:16:10.201 "qid": 0, 00:16:10.201 "state": "enabled", 00:16:10.201 "thread": "nvmf_tgt_poll_group_000", 00:16:10.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:10.201 "listen_address": { 00:16:10.201 "trtype": "TCP", 00:16:10.201 "adrfam": "IPv4", 00:16:10.201 "traddr": "10.0.0.2", 00:16:10.201 "trsvcid": "4420" 00:16:10.201 }, 00:16:10.201 "peer_address": { 00:16:10.201 "trtype": "TCP", 00:16:10.201 "adrfam": "IPv4", 00:16:10.201 "traddr": "10.0.0.1", 00:16:10.201 "trsvcid": "33722" 00:16:10.201 }, 00:16:10.201 "auth": { 00:16:10.201 "state": "completed", 00:16:10.201 "digest": "sha384", 00:16:10.201 "dhgroup": "ffdhe4096" 00:16:10.201 } 00:16:10.201 } 00:16:10.201 ]' 00:16:10.201 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.482 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.482 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.482 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:10.482 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.482 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.482 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.482 15:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.741 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:16:10.741 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:16:11.309 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.309 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.309 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.309 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.309 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.309 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:11.309 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.309 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:16:11.309 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:11.309 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:11.309 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.309 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:11.309 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:11.309 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:11.309 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.309 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.309 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.309 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.309 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.309 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.309 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.309 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.877 00:16:11.877 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.877 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.877 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.877 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.877 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.877 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.877 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.877 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.877 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.877 { 00:16:11.877 "cntlid": 81, 00:16:11.877 "qid": 0, 00:16:11.877 "state": "enabled", 00:16:11.877 "thread": "nvmf_tgt_poll_group_000", 00:16:11.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:11.877 "listen_address": { 
00:16:11.877 "trtype": "TCP", 00:16:11.877 "adrfam": "IPv4", 00:16:11.877 "traddr": "10.0.0.2", 00:16:11.877 "trsvcid": "4420" 00:16:11.877 }, 00:16:11.877 "peer_address": { 00:16:11.877 "trtype": "TCP", 00:16:11.877 "adrfam": "IPv4", 00:16:11.877 "traddr": "10.0.0.1", 00:16:11.877 "trsvcid": "33746" 00:16:11.877 }, 00:16:11.877 "auth": { 00:16:11.877 "state": "completed", 00:16:11.877 "digest": "sha384", 00:16:11.877 "dhgroup": "ffdhe6144" 00:16:11.877 } 00:16:11.877 } 00:16:11.877 ]' 00:16:11.877 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.136 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.136 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.136 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:12.136 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.136 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.136 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.136 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.395 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:16:12.395 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:16:13.009 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.009 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.009 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.009 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.009 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.009 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.009 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:13.009 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:13.313 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:13.313 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:16:13.313 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.313 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:13.313 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:13.313 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.313 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.313 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.313 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.313 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.313 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.313 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.313 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.610 00:16:13.610 15:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.610 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.610 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.610 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.610 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.610 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.610 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.610 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.610 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.610 { 00:16:13.610 "cntlid": 83, 00:16:13.610 "qid": 0, 00:16:13.610 "state": "enabled", 00:16:13.610 "thread": "nvmf_tgt_poll_group_000", 00:16:13.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:13.610 "listen_address": { 00:16:13.610 "trtype": "TCP", 00:16:13.610 "adrfam": "IPv4", 00:16:13.610 "traddr": "10.0.0.2", 00:16:13.610 "trsvcid": "4420" 00:16:13.610 }, 00:16:13.610 "peer_address": { 00:16:13.610 "trtype": "TCP", 00:16:13.610 "adrfam": "IPv4", 00:16:13.610 "traddr": "10.0.0.1", 00:16:13.610 "trsvcid": "33768" 00:16:13.610 }, 00:16:13.610 "auth": { 00:16:13.610 "state": "completed", 00:16:13.610 "digest": "sha384", 00:16:13.610 "dhgroup": "ffdhe6144" 00:16:13.610 } 00:16:13.610 } 00:16:13.610 ]' 00:16:13.610 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:16:13.868 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.868 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.869 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:13.869 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.869 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.869 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.869 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.127 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:16:14.127 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:16:14.694 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.694 15:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.694 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.694 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.694 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.694 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.694 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:14.694 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:14.694 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:14.694 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.694 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:14.694 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:14.694 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:14.694 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.694 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.694 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.694 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.694 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.694 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.954 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.954 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.212 00:16:15.212 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.212 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.212 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.472 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.472 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.472 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.472 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.472 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.472 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.472 { 00:16:15.472 "cntlid": 85, 00:16:15.472 "qid": 0, 00:16:15.472 "state": "enabled", 00:16:15.472 "thread": "nvmf_tgt_poll_group_000", 00:16:15.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:15.472 "listen_address": { 00:16:15.472 "trtype": "TCP", 00:16:15.472 "adrfam": "IPv4", 00:16:15.472 "traddr": "10.0.0.2", 00:16:15.472 "trsvcid": "4420" 00:16:15.472 }, 00:16:15.472 "peer_address": { 00:16:15.472 "trtype": "TCP", 00:16:15.472 "adrfam": "IPv4", 00:16:15.472 "traddr": "10.0.0.1", 00:16:15.472 "trsvcid": "33792" 00:16:15.472 }, 00:16:15.472 "auth": { 00:16:15.472 "state": "completed", 00:16:15.472 "digest": "sha384", 00:16:15.472 "dhgroup": "ffdhe6144" 00:16:15.472 } 00:16:15.472 } 00:16:15.472 ]' 00:16:15.472 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.472 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.472 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.472 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:15.472 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.472 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:15.472 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.472 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.731 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:16:15.731 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:16:16.299 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.299 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.299 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.299 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.299 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.299 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:16.299 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:16.299 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:16.558 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:16.558 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.558 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:16.558 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:16.558 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:16.558 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.558 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:16.558 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.558 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.558 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.558 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:16.558 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.558 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.816 00:16:16.816 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.816 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.816 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.075 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.075 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.075 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.075 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.075 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.075 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.075 { 00:16:17.075 "cntlid": 87, 00:16:17.075 "qid": 0, 00:16:17.075 "state": "enabled", 00:16:17.075 "thread": "nvmf_tgt_poll_group_000", 00:16:17.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:17.075 "listen_address": { 00:16:17.075 "trtype": 
"TCP", 00:16:17.075 "adrfam": "IPv4", 00:16:17.075 "traddr": "10.0.0.2", 00:16:17.075 "trsvcid": "4420" 00:16:17.075 }, 00:16:17.075 "peer_address": { 00:16:17.075 "trtype": "TCP", 00:16:17.075 "adrfam": "IPv4", 00:16:17.075 "traddr": "10.0.0.1", 00:16:17.075 "trsvcid": "33822" 00:16:17.075 }, 00:16:17.075 "auth": { 00:16:17.075 "state": "completed", 00:16:17.075 "digest": "sha384", 00:16:17.075 "dhgroup": "ffdhe6144" 00:16:17.075 } 00:16:17.075 } 00:16:17.075 ]' 00:16:17.075 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.076 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.076 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.076 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:17.076 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.335 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.335 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.335 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.335 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:16:17.335 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:16:17.903 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.903 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.903 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.903 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.903 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.903 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:17.903 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.903 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:17.904 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.162 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:18.162 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.162 15:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.162 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:18.162 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:18.162 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.162 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.162 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.162 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.162 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.162 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.162 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.162 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.728 00:16:18.728 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.728 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.728 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.987 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.987 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.987 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.987 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.987 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.987 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.987 { 00:16:18.987 "cntlid": 89, 00:16:18.987 "qid": 0, 00:16:18.987 "state": "enabled", 00:16:18.987 "thread": "nvmf_tgt_poll_group_000", 00:16:18.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:18.987 "listen_address": { 00:16:18.987 "trtype": "TCP", 00:16:18.987 "adrfam": "IPv4", 00:16:18.987 "traddr": "10.0.0.2", 00:16:18.987 "trsvcid": "4420" 00:16:18.987 }, 00:16:18.987 "peer_address": { 00:16:18.987 "trtype": "TCP", 00:16:18.987 "adrfam": "IPv4", 00:16:18.987 "traddr": "10.0.0.1", 00:16:18.987 "trsvcid": "33858" 00:16:18.987 }, 00:16:18.987 "auth": { 00:16:18.987 "state": "completed", 00:16:18.987 "digest": "sha384", 00:16:18.987 "dhgroup": "ffdhe8192" 00:16:18.987 } 00:16:18.987 } 00:16:18.987 ]' 00:16:18.987 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.987 15:25:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.987 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.987 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:18.987 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.987 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.987 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.987 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.245 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:16:19.245 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:16:19.812 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:19.812 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:19.812 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.812 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.812 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.812 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.812 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:19.812 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:20.071 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:20.071 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.071 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:20.071 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:20.071 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:20.071 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.071 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.071 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.071 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.071 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.071 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.071 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.071 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.640 00:16:20.640 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.640 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.640 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.640 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.640 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.640 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.640 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.640 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.640 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.640 { 00:16:20.640 "cntlid": 91, 00:16:20.640 "qid": 0, 00:16:20.640 "state": "enabled", 00:16:20.640 "thread": "nvmf_tgt_poll_group_000", 00:16:20.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:20.640 "listen_address": { 00:16:20.640 "trtype": "TCP", 00:16:20.640 "adrfam": "IPv4", 00:16:20.640 "traddr": "10.0.0.2", 00:16:20.640 "trsvcid": "4420" 00:16:20.640 }, 00:16:20.640 "peer_address": { 00:16:20.640 "trtype": "TCP", 00:16:20.640 "adrfam": "IPv4", 00:16:20.640 "traddr": "10.0.0.1", 00:16:20.640 "trsvcid": "50068" 00:16:20.640 }, 00:16:20.640 "auth": { 00:16:20.640 "state": "completed", 00:16:20.640 "digest": "sha384", 00:16:20.640 "dhgroup": "ffdhe8192" 00:16:20.640 } 00:16:20.640 } 00:16:20.640 ]' 00:16:20.640 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.899 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.899 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.899 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:20.899 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.899 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:20.899 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.899 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.158 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:16:21.158 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:16:21.725 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.725 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.725 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.725 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.725 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.725 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:21.725 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:21.725 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:21.725 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:21.725 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.725 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:21.725 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:21.725 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:21.984 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.984 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.984 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.985 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.985 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.985 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.985 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.985 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.244 00:16:22.244 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.244 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.244 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.503 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.503 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.503 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.503 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.503 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.503 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.503 { 00:16:22.503 "cntlid": 93, 00:16:22.503 "qid": 0, 00:16:22.503 "state": "enabled", 00:16:22.503 "thread": "nvmf_tgt_poll_group_000", 00:16:22.503 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:22.503 "listen_address": { 00:16:22.503 "trtype": "TCP", 00:16:22.503 "adrfam": "IPv4", 00:16:22.503 "traddr": "10.0.0.2", 00:16:22.503 "trsvcid": "4420" 00:16:22.503 }, 00:16:22.503 "peer_address": { 00:16:22.503 "trtype": "TCP", 00:16:22.503 "adrfam": "IPv4", 00:16:22.503 "traddr": "10.0.0.1", 00:16:22.503 "trsvcid": "50108" 00:16:22.503 }, 00:16:22.503 "auth": { 00:16:22.503 "state": "completed", 00:16:22.503 "digest": "sha384", 00:16:22.503 "dhgroup": "ffdhe8192" 00:16:22.503 } 00:16:22.503 } 00:16:22.503 ]' 00:16:22.503 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.503 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.503 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.762 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:22.762 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.762 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.762 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.762 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.021 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:16:23.021 15:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:23.590 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.159 00:16:24.159 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:24.159 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.159 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.418 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.418 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.418 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.418 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.418 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.418 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.418 { 00:16:24.418 "cntlid": 95, 00:16:24.418 "qid": 0, 00:16:24.418 "state": "enabled", 00:16:24.418 "thread": "nvmf_tgt_poll_group_000", 00:16:24.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:24.418 "listen_address": { 00:16:24.418 "trtype": "TCP", 00:16:24.418 "adrfam": "IPv4", 00:16:24.418 "traddr": "10.0.0.2", 00:16:24.418 "trsvcid": "4420" 00:16:24.418 }, 00:16:24.418 "peer_address": { 00:16:24.418 "trtype": "TCP", 00:16:24.418 "adrfam": "IPv4", 00:16:24.418 "traddr": "10.0.0.1", 00:16:24.418 "trsvcid": "50136" 00:16:24.418 }, 00:16:24.418 "auth": { 00:16:24.418 "state": "completed", 00:16:24.418 "digest": "sha384", 00:16:24.418 "dhgroup": "ffdhe8192" 00:16:24.418 } 00:16:24.418 } 00:16:24.418 ]' 00:16:24.418 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.418 15:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.418 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.418 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:24.418 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.418 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.418 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.418 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.678 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:16:24.678 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:16:25.245 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.245 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:25.245 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.245 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.245 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.245 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:25.245 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:25.245 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.245 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:25.245 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:25.503 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:25.503 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.503 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:25.503 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:25.503 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:25.503 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.503 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.503 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.503 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.503 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.503 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.503 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.503 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.761 00:16:25.761 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.761 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.761 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.042 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.042 15:25:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.042 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.042 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.042 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.042 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.042 { 00:16:26.042 "cntlid": 97, 00:16:26.042 "qid": 0, 00:16:26.042 "state": "enabled", 00:16:26.042 "thread": "nvmf_tgt_poll_group_000", 00:16:26.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:26.042 "listen_address": { 00:16:26.042 "trtype": "TCP", 00:16:26.042 "adrfam": "IPv4", 00:16:26.042 "traddr": "10.0.0.2", 00:16:26.042 "trsvcid": "4420" 00:16:26.042 }, 00:16:26.042 "peer_address": { 00:16:26.042 "trtype": "TCP", 00:16:26.042 "adrfam": "IPv4", 00:16:26.042 "traddr": "10.0.0.1", 00:16:26.042 "trsvcid": "50160" 00:16:26.042 }, 00:16:26.042 "auth": { 00:16:26.042 "state": "completed", 00:16:26.042 "digest": "sha512", 00:16:26.042 "dhgroup": "null" 00:16:26.042 } 00:16:26.042 } 00:16:26.042 ]' 00:16:26.042 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.042 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.042 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.042 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:26.042 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.042 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.042 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.042 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.301 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:16:26.301 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:16:26.869 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.869 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.869 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.869 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.869 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.869 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.869 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:26.869 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:27.128 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:27.128 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.128 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:27.128 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:27.128 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:27.128 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.129 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.129 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.129 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.129 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.129 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.129 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.129 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.388 00:16:27.388 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.388 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.388 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.388 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.388 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.388 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.388 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.648 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.648 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.648 { 00:16:27.648 "cntlid": 99, 
00:16:27.648 "qid": 0, 00:16:27.648 "state": "enabled", 00:16:27.648 "thread": "nvmf_tgt_poll_group_000", 00:16:27.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:27.648 "listen_address": { 00:16:27.648 "trtype": "TCP", 00:16:27.648 "adrfam": "IPv4", 00:16:27.648 "traddr": "10.0.0.2", 00:16:27.648 "trsvcid": "4420" 00:16:27.648 }, 00:16:27.648 "peer_address": { 00:16:27.648 "trtype": "TCP", 00:16:27.648 "adrfam": "IPv4", 00:16:27.648 "traddr": "10.0.0.1", 00:16:27.648 "trsvcid": "50188" 00:16:27.648 }, 00:16:27.648 "auth": { 00:16:27.648 "state": "completed", 00:16:27.648 "digest": "sha512", 00:16:27.648 "dhgroup": "null" 00:16:27.648 } 00:16:27.648 } 00:16:27.648 ]' 00:16:27.648 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.648 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.648 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.648 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:27.648 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.648 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.648 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.648 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.907 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret 
DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:16:27.907 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:16:28.476 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.476 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.476 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.476 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.476 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.476 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.476 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:28.476 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:28.735 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:16:28.735 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.735 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:28.735 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:28.735 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:28.735 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.735 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.735 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.735 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.735 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.735 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.735 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.735 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.993 00:16:28.993 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.993 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.993 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.252 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.252 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.252 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.252 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.252 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.252 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.252 { 00:16:29.252 "cntlid": 101, 00:16:29.252 "qid": 0, 00:16:29.252 "state": "enabled", 00:16:29.252 "thread": "nvmf_tgt_poll_group_000", 00:16:29.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:29.252 "listen_address": { 00:16:29.252 "trtype": "TCP", 00:16:29.252 "adrfam": "IPv4", 00:16:29.252 "traddr": "10.0.0.2", 00:16:29.252 "trsvcid": "4420" 00:16:29.252 }, 00:16:29.252 "peer_address": { 00:16:29.252 "trtype": "TCP", 00:16:29.252 "adrfam": "IPv4", 00:16:29.252 "traddr": "10.0.0.1", 00:16:29.252 "trsvcid": "50212" 00:16:29.252 }, 00:16:29.252 "auth": { 00:16:29.252 "state": "completed", 00:16:29.252 "digest": "sha512", 00:16:29.252 "dhgroup": "null" 00:16:29.252 } 00:16:29.252 } 
00:16:29.252 ]' 00:16:29.252 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.252 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.252 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.252 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:29.252 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.252 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.252 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.252 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.512 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:16:29.512 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:16:30.079 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.079 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.079 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.080 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.080 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.080 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.080 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.080 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:30.080 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:30.339 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:30.339 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.339 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:30.339 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:30.339 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:30.339 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.339 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:30.339 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.339 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.339 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.339 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:30.339 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.339 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.598 00:16:30.598 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.598 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.598 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.858 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.858 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:30.858 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.858 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.858 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.858 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.858 { 00:16:30.858 "cntlid": 103, 00:16:30.858 "qid": 0, 00:16:30.858 "state": "enabled", 00:16:30.858 "thread": "nvmf_tgt_poll_group_000", 00:16:30.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:30.858 "listen_address": { 00:16:30.858 "trtype": "TCP", 00:16:30.858 "adrfam": "IPv4", 00:16:30.858 "traddr": "10.0.0.2", 00:16:30.858 "trsvcid": "4420" 00:16:30.858 }, 00:16:30.858 "peer_address": { 00:16:30.858 "trtype": "TCP", 00:16:30.858 "adrfam": "IPv4", 00:16:30.858 "traddr": "10.0.0.1", 00:16:30.858 "trsvcid": "45562" 00:16:30.858 }, 00:16:30.858 "auth": { 00:16:30.858 "state": "completed", 00:16:30.858 "digest": "sha512", 00:16:30.858 "dhgroup": "null" 00:16:30.858 } 00:16:30.858 } 00:16:30.858 ]' 00:16:30.858 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.858 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.858 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.858 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:30.858 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.858 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.858 15:25:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.858 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.117 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:16:31.118 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:16:31.686 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.686 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:31.686 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.686 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.686 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.686 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:31.686 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.686 15:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:31.686 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:31.945 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:31.945 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.945 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.945 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:31.945 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:31.945 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.945 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.945 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.945 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.945 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.945 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.945 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.945 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.204 00:16:32.204 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.204 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.204 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.463 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.463 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.463 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.463 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.463 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.463 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.463 { 00:16:32.463 "cntlid": 105, 00:16:32.463 "qid": 0, 00:16:32.463 "state": "enabled", 00:16:32.463 "thread": "nvmf_tgt_poll_group_000", 00:16:32.463 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:32.463 "listen_address": { 00:16:32.463 "trtype": "TCP", 00:16:32.463 "adrfam": "IPv4", 00:16:32.463 "traddr": "10.0.0.2", 00:16:32.463 "trsvcid": "4420" 00:16:32.463 }, 00:16:32.463 "peer_address": { 00:16:32.463 "trtype": "TCP", 00:16:32.463 "adrfam": "IPv4", 00:16:32.463 "traddr": "10.0.0.1", 00:16:32.463 "trsvcid": "45600" 00:16:32.463 }, 00:16:32.463 "auth": { 00:16:32.463 "state": "completed", 00:16:32.463 "digest": "sha512", 00:16:32.463 "dhgroup": "ffdhe2048" 00:16:32.463 } 00:16:32.463 } 00:16:32.463 ]' 00:16:32.463 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.463 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.463 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.463 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:32.463 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.463 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.463 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.463 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.722 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:16:32.722 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:16:33.291 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.291 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.291 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.291 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.291 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.291 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.291 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:33.291 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:33.550 15:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:33.550 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.550 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:33.550 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:33.550 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:33.550 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.550 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.550 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.550 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.550 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.550 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.550 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.550 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.817 00:16:33.817 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.817 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.817 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.077 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.077 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.077 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.077 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.077 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.077 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.077 { 00:16:34.077 "cntlid": 107, 00:16:34.077 "qid": 0, 00:16:34.077 "state": "enabled", 00:16:34.077 "thread": "nvmf_tgt_poll_group_000", 00:16:34.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:34.077 "listen_address": { 00:16:34.077 "trtype": "TCP", 00:16:34.077 "adrfam": "IPv4", 00:16:34.077 "traddr": "10.0.0.2", 00:16:34.077 "trsvcid": "4420" 00:16:34.077 }, 00:16:34.077 "peer_address": { 00:16:34.077 "trtype": "TCP", 00:16:34.077 "adrfam": "IPv4", 00:16:34.077 "traddr": "10.0.0.1", 00:16:34.077 "trsvcid": "45626" 00:16:34.077 }, 00:16:34.077 "auth": { 00:16:34.077 "state": 
"completed", 00:16:34.077 "digest": "sha512", 00:16:34.077 "dhgroup": "ffdhe2048" 00:16:34.077 } 00:16:34.077 } 00:16:34.077 ]' 00:16:34.077 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.077 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.077 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.077 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:34.077 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.077 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.077 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.077 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.335 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:16:34.335 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:16:34.903 15:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.903 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:34.903 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.903 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.903 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.903 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.903 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:34.903 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:35.163 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:35.163 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.163 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:35.163 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:35.163 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:35.163 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.163 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.163 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.163 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.163 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.163 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.163 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.163 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.422 00:16:35.422 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.422 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.422 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.422 
15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.422 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.422 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.422 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.681 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.681 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.681 { 00:16:35.681 "cntlid": 109, 00:16:35.681 "qid": 0, 00:16:35.681 "state": "enabled", 00:16:35.681 "thread": "nvmf_tgt_poll_group_000", 00:16:35.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:35.681 "listen_address": { 00:16:35.681 "trtype": "TCP", 00:16:35.681 "adrfam": "IPv4", 00:16:35.681 "traddr": "10.0.0.2", 00:16:35.681 "trsvcid": "4420" 00:16:35.681 }, 00:16:35.681 "peer_address": { 00:16:35.681 "trtype": "TCP", 00:16:35.681 "adrfam": "IPv4", 00:16:35.681 "traddr": "10.0.0.1", 00:16:35.681 "trsvcid": "45654" 00:16:35.681 }, 00:16:35.681 "auth": { 00:16:35.681 "state": "completed", 00:16:35.681 "digest": "sha512", 00:16:35.681 "dhgroup": "ffdhe2048" 00:16:35.681 } 00:16:35.681 } 00:16:35.681 ]' 00:16:35.681 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.681 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.681 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.681 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:35.681 15:25:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.681 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.681 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.681 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.939 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:16:35.939 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:16:36.507 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.507 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.507 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.507 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.507 
15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.507 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.507 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:36.507 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:36.766 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:36.766 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.766 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.766 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:36.766 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:36.766 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.766 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:36.766 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.766 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.766 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.766 15:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:36.766 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:36.766 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.025 00:16:37.025 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.025 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.025 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.284 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.284 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.284 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.284 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.284 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.284 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.284 { 00:16:37.284 "cntlid": 111, 
00:16:37.284 "qid": 0, 00:16:37.284 "state": "enabled", 00:16:37.284 "thread": "nvmf_tgt_poll_group_000", 00:16:37.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:37.284 "listen_address": { 00:16:37.284 "trtype": "TCP", 00:16:37.284 "adrfam": "IPv4", 00:16:37.284 "traddr": "10.0.0.2", 00:16:37.284 "trsvcid": "4420" 00:16:37.284 }, 00:16:37.284 "peer_address": { 00:16:37.284 "trtype": "TCP", 00:16:37.284 "adrfam": "IPv4", 00:16:37.284 "traddr": "10.0.0.1", 00:16:37.284 "trsvcid": "45680" 00:16:37.284 }, 00:16:37.284 "auth": { 00:16:37.284 "state": "completed", 00:16:37.284 "digest": "sha512", 00:16:37.284 "dhgroup": "ffdhe2048" 00:16:37.284 } 00:16:37.284 } 00:16:37.284 ]' 00:16:37.284 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.284 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.284 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.284 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:37.285 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.285 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.285 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.285 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.544 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:16:37.544 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:16:38.112 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.112 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.112 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.112 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.112 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.112 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.112 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.112 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:38.112 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:38.371 15:25:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:38.371 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.371 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.371 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:38.371 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:38.371 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.371 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.371 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.371 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.371 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.371 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.371 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.371 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.630 00:16:38.631 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.631 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.631 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.890 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.890 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.890 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.890 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.890 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.890 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.890 { 00:16:38.890 "cntlid": 113, 00:16:38.890 "qid": 0, 00:16:38.890 "state": "enabled", 00:16:38.890 "thread": "nvmf_tgt_poll_group_000", 00:16:38.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:38.890 "listen_address": { 00:16:38.890 "trtype": "TCP", 00:16:38.890 "adrfam": "IPv4", 00:16:38.890 "traddr": "10.0.0.2", 00:16:38.890 "trsvcid": "4420" 00:16:38.890 }, 00:16:38.890 "peer_address": { 00:16:38.890 "trtype": "TCP", 00:16:38.890 "adrfam": "IPv4", 00:16:38.890 "traddr": "10.0.0.1", 00:16:38.890 "trsvcid": "45708" 00:16:38.890 }, 00:16:38.890 "auth": { 00:16:38.890 "state": 
"completed", 00:16:38.890 "digest": "sha512", 00:16:38.890 "dhgroup": "ffdhe3072" 00:16:38.890 } 00:16:38.890 } 00:16:38.890 ]' 00:16:38.890 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.890 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.890 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.890 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:38.890 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.890 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.890 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.890 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.148 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:16:39.148 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:16:39.715 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.715 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.715 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.715 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.715 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.715 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.715 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:39.715 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:39.973 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:39.973 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.973 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:39.973 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:39.973 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:39.973 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.973 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.973 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.973 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.973 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.973 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.973 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.973 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.231 00:16:40.231 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.231 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.231 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.489 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.489 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.489 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.489 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.489 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.489 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.489 { 00:16:40.489 "cntlid": 115, 00:16:40.489 "qid": 0, 00:16:40.489 "state": "enabled", 00:16:40.489 "thread": "nvmf_tgt_poll_group_000", 00:16:40.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:40.489 "listen_address": { 00:16:40.489 "trtype": "TCP", 00:16:40.489 "adrfam": "IPv4", 00:16:40.489 "traddr": "10.0.0.2", 00:16:40.489 "trsvcid": "4420" 00:16:40.489 }, 00:16:40.489 "peer_address": { 00:16:40.489 "trtype": "TCP", 00:16:40.489 "adrfam": "IPv4", 00:16:40.489 "traddr": "10.0.0.1", 00:16:40.489 "trsvcid": "59574" 00:16:40.489 }, 00:16:40.489 "auth": { 00:16:40.489 "state": "completed", 00:16:40.489 "digest": "sha512", 00:16:40.489 "dhgroup": "ffdhe3072" 00:16:40.489 } 00:16:40.489 } 00:16:40.489 ]' 00:16:40.489 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.489 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.489 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.489 15:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:40.489 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.489 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.489 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.489 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.747 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:16:40.747 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:16:41.313 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.313 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.313 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:41.313 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.313 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.313 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.313 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:41.313 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:41.571 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:41.571 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.571 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.571 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:41.571 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:41.571 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.571 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.571 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.571 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:41.571 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.571 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.571 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.571 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.831 00:16:41.831 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.831 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.831 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.090 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.090 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.090 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.090 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.090 15:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.090 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.090 { 00:16:42.090 "cntlid": 117, 00:16:42.090 "qid": 0, 00:16:42.090 "state": "enabled", 00:16:42.090 "thread": "nvmf_tgt_poll_group_000", 00:16:42.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:42.090 "listen_address": { 00:16:42.090 "trtype": "TCP", 00:16:42.090 "adrfam": "IPv4", 00:16:42.090 "traddr": "10.0.0.2", 00:16:42.090 "trsvcid": "4420" 00:16:42.090 }, 00:16:42.090 "peer_address": { 00:16:42.090 "trtype": "TCP", 00:16:42.090 "adrfam": "IPv4", 00:16:42.090 "traddr": "10.0.0.1", 00:16:42.090 "trsvcid": "59600" 00:16:42.090 }, 00:16:42.090 "auth": { 00:16:42.090 "state": "completed", 00:16:42.090 "digest": "sha512", 00:16:42.090 "dhgroup": "ffdhe3072" 00:16:42.090 } 00:16:42.090 } 00:16:42.090 ]' 00:16:42.090 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.090 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.090 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.090 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.090 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.090 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.090 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.090 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.348 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:16:42.349 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:16:42.916 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.916 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.916 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.916 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.916 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.916 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.917 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:42.917 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:43.176 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:43.176 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.176 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.176 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:43.176 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:43.176 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.176 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:43.176 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.177 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.177 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.177 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:43.177 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.177 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.436 00:16:43.436 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.436 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.436 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.695 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.695 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.695 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.695 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.695 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.695 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.695 { 00:16:43.695 "cntlid": 119, 00:16:43.695 "qid": 0, 00:16:43.695 "state": "enabled", 00:16:43.695 "thread": "nvmf_tgt_poll_group_000", 00:16:43.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:43.695 "listen_address": { 00:16:43.695 "trtype": "TCP", 00:16:43.695 "adrfam": "IPv4", 00:16:43.695 "traddr": "10.0.0.2", 00:16:43.695 "trsvcid": "4420" 00:16:43.695 }, 00:16:43.695 "peer_address": { 00:16:43.695 "trtype": "TCP", 00:16:43.695 "adrfam": "IPv4", 00:16:43.695 "traddr": "10.0.0.1", 
00:16:43.695 "trsvcid": "59612" 00:16:43.695 }, 00:16:43.695 "auth": { 00:16:43.695 "state": "completed", 00:16:43.695 "digest": "sha512", 00:16:43.695 "dhgroup": "ffdhe3072" 00:16:43.695 } 00:16:43.695 } 00:16:43.695 ]' 00:16:43.695 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.695 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.695 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.695 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:43.695 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.695 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.695 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.695 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.954 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:16:43.954 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:16:44.522 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.522 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.522 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.522 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.522 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.522 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:44.522 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.522 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:44.522 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:44.781 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:44.781 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.781 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.781 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:44.781 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:44.781 15:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.781 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.781 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.781 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.781 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.781 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.781 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.781 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.040 00:16:45.040 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.040 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.040 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.299 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.299 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.299 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.299 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.299 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.299 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.299 { 00:16:45.299 "cntlid": 121, 00:16:45.299 "qid": 0, 00:16:45.299 "state": "enabled", 00:16:45.299 "thread": "nvmf_tgt_poll_group_000", 00:16:45.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:45.299 "listen_address": { 00:16:45.299 "trtype": "TCP", 00:16:45.299 "adrfam": "IPv4", 00:16:45.299 "traddr": "10.0.0.2", 00:16:45.299 "trsvcid": "4420" 00:16:45.299 }, 00:16:45.299 "peer_address": { 00:16:45.299 "trtype": "TCP", 00:16:45.299 "adrfam": "IPv4", 00:16:45.299 "traddr": "10.0.0.1", 00:16:45.299 "trsvcid": "59646" 00:16:45.299 }, 00:16:45.299 "auth": { 00:16:45.299 "state": "completed", 00:16:45.299 "digest": "sha512", 00:16:45.299 "dhgroup": "ffdhe4096" 00:16:45.299 } 00:16:45.299 } 00:16:45.299 ]' 00:16:45.299 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.299 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.299 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.299 15:25:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.299 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.299 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.299 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.299 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.558 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:16:45.558 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:16:46.126 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.126 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.126 15:25:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.126 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.126 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.126 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.126 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:46.126 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:46.385 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:46.385 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.385 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.385 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:46.385 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:46.385 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.385 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.385 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.385 15:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.385 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.385 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.385 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.385 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.644 00:16:46.644 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.644 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.644 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.903 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.903 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.903 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.903 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:46.903 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.903 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.903 { 00:16:46.903 "cntlid": 123, 00:16:46.903 "qid": 0, 00:16:46.903 "state": "enabled", 00:16:46.903 "thread": "nvmf_tgt_poll_group_000", 00:16:46.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:46.903 "listen_address": { 00:16:46.903 "trtype": "TCP", 00:16:46.903 "adrfam": "IPv4", 00:16:46.903 "traddr": "10.0.0.2", 00:16:46.903 "trsvcid": "4420" 00:16:46.903 }, 00:16:46.903 "peer_address": { 00:16:46.903 "trtype": "TCP", 00:16:46.903 "adrfam": "IPv4", 00:16:46.903 "traddr": "10.0.0.1", 00:16:46.903 "trsvcid": "59682" 00:16:46.903 }, 00:16:46.903 "auth": { 00:16:46.903 "state": "completed", 00:16:46.903 "digest": "sha512", 00:16:46.903 "dhgroup": "ffdhe4096" 00:16:46.903 } 00:16:46.903 } 00:16:46.903 ]' 00:16:46.903 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.904 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.904 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.904 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:46.904 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.904 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.904 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.904 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.163 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:16:47.163 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:16:47.730 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.730 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.730 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.730 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.730 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.730 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.730 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:47.730 15:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:47.990 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:47.990 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.990 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:47.990 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:47.990 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:47.990 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.990 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.990 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.990 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.990 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.990 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.990 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.990 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.249 00:16:48.249 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.249 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.249 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.509 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.509 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.509 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.509 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.509 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.509 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.509 { 00:16:48.509 "cntlid": 125, 00:16:48.509 "qid": 0, 00:16:48.509 "state": "enabled", 00:16:48.509 "thread": "nvmf_tgt_poll_group_000", 00:16:48.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:48.509 "listen_address": { 00:16:48.509 "trtype": "TCP", 00:16:48.509 "adrfam": "IPv4", 00:16:48.509 "traddr": "10.0.0.2", 00:16:48.509 
"trsvcid": "4420" 00:16:48.509 }, 00:16:48.509 "peer_address": { 00:16:48.509 "trtype": "TCP", 00:16:48.509 "adrfam": "IPv4", 00:16:48.509 "traddr": "10.0.0.1", 00:16:48.509 "trsvcid": "59716" 00:16:48.509 }, 00:16:48.509 "auth": { 00:16:48.509 "state": "completed", 00:16:48.509 "digest": "sha512", 00:16:48.509 "dhgroup": "ffdhe4096" 00:16:48.509 } 00:16:48.509 } 00:16:48.509 ]' 00:16:48.509 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.509 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.509 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.509 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:48.509 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.509 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.509 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.509 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.768 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:16:48.768 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:16:49.336 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.336 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.336 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.336 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.336 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.336 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.336 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:49.336 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:49.595 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:49.595 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.595 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.595 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:49.595 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:49.595 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.596 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:49.596 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.596 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.596 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.596 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:49.596 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.596 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.855 00:16:49.855 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.855 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.855 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.114 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.114 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.114 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.114 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.114 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.114 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.114 { 00:16:50.114 "cntlid": 127, 00:16:50.114 "qid": 0, 00:16:50.114 "state": "enabled", 00:16:50.114 "thread": "nvmf_tgt_poll_group_000", 00:16:50.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:50.114 "listen_address": { 00:16:50.114 "trtype": "TCP", 00:16:50.114 "adrfam": "IPv4", 00:16:50.114 "traddr": "10.0.0.2", 00:16:50.114 "trsvcid": "4420" 00:16:50.114 }, 00:16:50.114 "peer_address": { 00:16:50.114 "trtype": "TCP", 00:16:50.114 "adrfam": "IPv4", 00:16:50.114 "traddr": "10.0.0.1", 00:16:50.114 "trsvcid": "59740" 00:16:50.114 }, 00:16:50.114 "auth": { 00:16:50.114 "state": "completed", 00:16:50.114 "digest": "sha512", 00:16:50.114 "dhgroup": "ffdhe4096" 00:16:50.114 } 00:16:50.114 } 00:16:50.114 ]' 00:16:50.114 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.114 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.114 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.114 15:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:50.114 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.114 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.114 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.114 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.374 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:16:50.374 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:16:50.942 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.942 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.942 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.942 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:50.942 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.942 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.942 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.942 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:50.942 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:51.201 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:51.201 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.201 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:51.201 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:51.201 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:51.201 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.201 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.201 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.201 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:51.201 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.201 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.201 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.201 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.478 00:16:51.478 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.478 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.478 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.806 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.806 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.806 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.806 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.806 15:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.806 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.806 { 00:16:51.806 "cntlid": 129, 00:16:51.806 "qid": 0, 00:16:51.806 "state": "enabled", 00:16:51.806 "thread": "nvmf_tgt_poll_group_000", 00:16:51.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:51.806 "listen_address": { 00:16:51.806 "trtype": "TCP", 00:16:51.806 "adrfam": "IPv4", 00:16:51.806 "traddr": "10.0.0.2", 00:16:51.806 "trsvcid": "4420" 00:16:51.806 }, 00:16:51.806 "peer_address": { 00:16:51.806 "trtype": "TCP", 00:16:51.806 "adrfam": "IPv4", 00:16:51.806 "traddr": "10.0.0.1", 00:16:51.806 "trsvcid": "42588" 00:16:51.806 }, 00:16:51.806 "auth": { 00:16:51.806 "state": "completed", 00:16:51.806 "digest": "sha512", 00:16:51.806 "dhgroup": "ffdhe6144" 00:16:51.806 } 00:16:51.806 } 00:16:51.806 ]' 00:16:51.806 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.806 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.806 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.806 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:51.806 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.806 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.806 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.806 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.065 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:16:52.065 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:16:52.633 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.633 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.633 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.633 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.633 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.633 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.633 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:52.633 15:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:52.891 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:52.891 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.891 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.891 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:52.891 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:52.891 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.892 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.892 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.892 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.892 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.892 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.892 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.892 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.150 00:16:53.150 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.150 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.150 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.409 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.409 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.409 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.409 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.409 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.409 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.409 { 00:16:53.409 "cntlid": 131, 00:16:53.409 "qid": 0, 00:16:53.409 "state": "enabled", 00:16:53.409 "thread": "nvmf_tgt_poll_group_000", 00:16:53.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:53.409 "listen_address": { 00:16:53.409 "trtype": "TCP", 00:16:53.409 "adrfam": "IPv4", 00:16:53.409 "traddr": "10.0.0.2", 00:16:53.409 
"trsvcid": "4420" 00:16:53.409 }, 00:16:53.409 "peer_address": { 00:16:53.409 "trtype": "TCP", 00:16:53.409 "adrfam": "IPv4", 00:16:53.409 "traddr": "10.0.0.1", 00:16:53.409 "trsvcid": "42620" 00:16:53.409 }, 00:16:53.409 "auth": { 00:16:53.409 "state": "completed", 00:16:53.409 "digest": "sha512", 00:16:53.409 "dhgroup": "ffdhe6144" 00:16:53.409 } 00:16:53.409 } 00:16:53.409 ]' 00:16:53.409 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.409 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.409 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.409 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:53.409 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.409 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.409 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.409 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.669 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:16:53.669 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:16:54.236 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.236 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.236 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.236 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.236 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.236 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.236 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:54.236 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:54.495 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:54.495 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.495 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:54.495 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:54.495 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:54.495 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.495 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.495 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.495 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.495 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.495 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.495 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.495 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.754 00:16:55.013 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.013 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:55.013 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.013 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.013 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.013 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.013 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.013 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.013 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.013 { 00:16:55.013 "cntlid": 133, 00:16:55.013 "qid": 0, 00:16:55.013 "state": "enabled", 00:16:55.013 "thread": "nvmf_tgt_poll_group_000", 00:16:55.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:55.013 "listen_address": { 00:16:55.013 "trtype": "TCP", 00:16:55.013 "adrfam": "IPv4", 00:16:55.013 "traddr": "10.0.0.2", 00:16:55.013 "trsvcid": "4420" 00:16:55.013 }, 00:16:55.013 "peer_address": { 00:16:55.013 "trtype": "TCP", 00:16:55.013 "adrfam": "IPv4", 00:16:55.013 "traddr": "10.0.0.1", 00:16:55.013 "trsvcid": "42640" 00:16:55.013 }, 00:16:55.013 "auth": { 00:16:55.013 "state": "completed", 00:16:55.013 "digest": "sha512", 00:16:55.013 "dhgroup": "ffdhe6144" 00:16:55.013 } 00:16:55.013 } 00:16:55.013 ]' 00:16:55.013 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.013 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.013 15:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.272 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:55.272 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.272 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.272 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.272 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.531 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:16:55.531 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:16:56.106 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.106 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.106 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.106 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.106 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.106 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.106 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:56.106 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:56.106 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:56.106 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.106 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:56.106 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:56.106 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:56.106 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.106 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:56.107 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.107 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.107 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.107 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:56.107 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.107 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.674 00:16:56.674 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.674 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.674 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.674 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.674 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.674 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.674 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:56.933 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.933 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.933 { 00:16:56.933 "cntlid": 135, 00:16:56.933 "qid": 0, 00:16:56.933 "state": "enabled", 00:16:56.933 "thread": "nvmf_tgt_poll_group_000", 00:16:56.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:56.933 "listen_address": { 00:16:56.933 "trtype": "TCP", 00:16:56.933 "adrfam": "IPv4", 00:16:56.933 "traddr": "10.0.0.2", 00:16:56.933 "trsvcid": "4420" 00:16:56.933 }, 00:16:56.933 "peer_address": { 00:16:56.933 "trtype": "TCP", 00:16:56.933 "adrfam": "IPv4", 00:16:56.933 "traddr": "10.0.0.1", 00:16:56.933 "trsvcid": "42678" 00:16:56.933 }, 00:16:56.933 "auth": { 00:16:56.933 "state": "completed", 00:16:56.933 "digest": "sha512", 00:16:56.933 "dhgroup": "ffdhe6144" 00:16:56.933 } 00:16:56.933 } 00:16:56.933 ]' 00:16:56.933 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.933 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.933 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.933 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:56.933 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.933 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.933 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.933 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.192 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:16:57.192 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:16:57.759 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.759 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.759 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.759 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.759 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.759 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.759 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.759 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:57.759 15:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:58.018 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:58.018 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.018 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.018 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.018 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:58.018 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.018 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.018 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.018 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.018 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.018 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.018 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.018 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.587 00:16:58.587 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.587 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.587 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.587 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.587 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.587 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.587 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.587 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.587 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.587 { 00:16:58.587 "cntlid": 137, 00:16:58.587 "qid": 0, 00:16:58.587 "state": "enabled", 00:16:58.587 "thread": "nvmf_tgt_poll_group_000", 00:16:58.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:58.587 "listen_address": { 00:16:58.587 "trtype": "TCP", 00:16:58.587 "adrfam": "IPv4", 00:16:58.587 "traddr": "10.0.0.2", 00:16:58.587 
"trsvcid": "4420" 00:16:58.587 }, 00:16:58.587 "peer_address": { 00:16:58.587 "trtype": "TCP", 00:16:58.587 "adrfam": "IPv4", 00:16:58.587 "traddr": "10.0.0.1", 00:16:58.587 "trsvcid": "42704" 00:16:58.587 }, 00:16:58.587 "auth": { 00:16:58.587 "state": "completed", 00:16:58.587 "digest": "sha512", 00:16:58.587 "dhgroup": "ffdhe8192" 00:16:58.587 } 00:16:58.587 } 00:16:58.587 ]' 00:16:58.587 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.587 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.587 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.846 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.846 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.846 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.846 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.846 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.846 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:16:58.846 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:16:59.413 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.413 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.684 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.684 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.684 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.684 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.685 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:59.685 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:59.685 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:59.685 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.685 15:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.685 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:59.686 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:59.686 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.686 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.686 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.686 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.686 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.686 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.686 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.686 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.267 00:17:00.267 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.267 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.267 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.526 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.526 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.526 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.526 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.526 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.526 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.526 { 00:17:00.526 "cntlid": 139, 00:17:00.526 "qid": 0, 00:17:00.526 "state": "enabled", 00:17:00.526 "thread": "nvmf_tgt_poll_group_000", 00:17:00.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:00.526 "listen_address": { 00:17:00.526 "trtype": "TCP", 00:17:00.526 "adrfam": "IPv4", 00:17:00.526 "traddr": "10.0.0.2", 00:17:00.526 "trsvcid": "4420" 00:17:00.526 }, 00:17:00.526 "peer_address": { 00:17:00.526 "trtype": "TCP", 00:17:00.526 "adrfam": "IPv4", 00:17:00.526 "traddr": "10.0.0.1", 00:17:00.526 "trsvcid": "58854" 00:17:00.526 }, 00:17:00.526 "auth": { 00:17:00.526 "state": "completed", 00:17:00.526 "digest": "sha512", 00:17:00.526 "dhgroup": "ffdhe8192" 00:17:00.526 } 00:17:00.526 } 00:17:00.526 ]' 00:17:00.526 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.526 15:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.526 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.526 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.526 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.526 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.526 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.526 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.785 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:17:00.785 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: --dhchap-ctrl-secret DHHC-1:02:ODNkNDEzY2U5MTU1NWNmOGMxZDJlOTgwMDAwOTYyOTM5NDBkY2EwYjZkYTgzZDFjOUOomw==: 00:17:01.353 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.353 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.353 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.354 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.354 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.354 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.354 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:01.354 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:01.613 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:01.613 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.613 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:01.613 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:01.613 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:01.613 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.613 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:01.613 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.613 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.613 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.613 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.613 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.613 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.181 00:17:02.181 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.181 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.181 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.181 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.181 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.181 15:26:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.181 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.181 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.181 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.181 { 00:17:02.181 "cntlid": 141, 00:17:02.181 "qid": 0, 00:17:02.181 "state": "enabled", 00:17:02.181 "thread": "nvmf_tgt_poll_group_000", 00:17:02.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:02.181 "listen_address": { 00:17:02.181 "trtype": "TCP", 00:17:02.181 "adrfam": "IPv4", 00:17:02.181 "traddr": "10.0.0.2", 00:17:02.181 "trsvcid": "4420" 00:17:02.181 }, 00:17:02.181 "peer_address": { 00:17:02.181 "trtype": "TCP", 00:17:02.182 "adrfam": "IPv4", 00:17:02.182 "traddr": "10.0.0.1", 00:17:02.182 "trsvcid": "58894" 00:17:02.182 }, 00:17:02.182 "auth": { 00:17:02.182 "state": "completed", 00:17:02.182 "digest": "sha512", 00:17:02.182 "dhgroup": "ffdhe8192" 00:17:02.182 } 00:17:02.182 } 00:17:02.182 ]' 00:17:02.182 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.440 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.440 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.440 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.440 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.440 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.440 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.440 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.700 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:17:02.700 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:01:ZDM4NGNlOTQxOGZlMTM4Y2MzYzVmNjAwMjYxODVhMDHzESIb: 00:17:03.269 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.269 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.269 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.269 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.269 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.269 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.269 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:03.269 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:03.269 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:03.269 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.269 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:03.269 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:03.269 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:03.269 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.269 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:03.269 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.269 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.269 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.269 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:03.269 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.269 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.836 00:17:03.836 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.836 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.836 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.095 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.095 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.095 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.095 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.095 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.095 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.095 { 00:17:04.095 "cntlid": 143, 00:17:04.095 "qid": 0, 00:17:04.095 "state": "enabled", 00:17:04.095 "thread": "nvmf_tgt_poll_group_000", 00:17:04.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:04.095 "listen_address": { 00:17:04.095 "trtype": "TCP", 00:17:04.096 "adrfam": 
"IPv4", 00:17:04.096 "traddr": "10.0.0.2", 00:17:04.096 "trsvcid": "4420" 00:17:04.096 }, 00:17:04.096 "peer_address": { 00:17:04.096 "trtype": "TCP", 00:17:04.096 "adrfam": "IPv4", 00:17:04.096 "traddr": "10.0.0.1", 00:17:04.096 "trsvcid": "58922" 00:17:04.096 }, 00:17:04.096 "auth": { 00:17:04.096 "state": "completed", 00:17:04.096 "digest": "sha512", 00:17:04.096 "dhgroup": "ffdhe8192" 00:17:04.096 } 00:17:04.096 } 00:17:04.096 ]' 00:17:04.096 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.096 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.096 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.096 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.096 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.096 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.096 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.096 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.355 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:17:04.355 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:17:04.923 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.923 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.923 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.923 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.923 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.923 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:04.923 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:04.923 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:04.923 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:04.923 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:04.923 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:05.182 15:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:05.182 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.182 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.182 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:05.182 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:05.182 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.182 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.182 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.182 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.182 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.182 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.182 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.182 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.751 00:17:05.751 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.751 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.751 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.009 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.009 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.009 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.010 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.010 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.010 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.010 { 00:17:06.010 "cntlid": 145, 00:17:06.010 "qid": 0, 00:17:06.010 "state": "enabled", 00:17:06.010 "thread": "nvmf_tgt_poll_group_000", 00:17:06.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:06.010 "listen_address": { 00:17:06.010 "trtype": "TCP", 00:17:06.010 "adrfam": "IPv4", 00:17:06.010 "traddr": "10.0.0.2", 00:17:06.010 "trsvcid": "4420" 00:17:06.010 }, 00:17:06.010 "peer_address": { 00:17:06.010 "trtype": "TCP", 00:17:06.010 "adrfam": "IPv4", 00:17:06.010 "traddr": "10.0.0.1", 00:17:06.010 "trsvcid": "58952" 00:17:06.010 }, 00:17:06.010 "auth": { 00:17:06.010 "state": 
"completed", 00:17:06.010 "digest": "sha512", 00:17:06.010 "dhgroup": "ffdhe8192" 00:17:06.010 } 00:17:06.010 } 00:17:06.010 ]' 00:17:06.010 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.010 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.010 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.010 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:06.010 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.010 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.010 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.010 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.269 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:17:06.269 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDlkYjkzOTBlM2FjYjNhN2UxNzAxMGY4MDllNjc2MjI0ZmEzZWFjYzMyMzBmZmUxa1RDDQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZDc5Y2YyNzdiMDA2MDc4MjhhNTJkOWYxOGE5N2VjNmU2ZmIyZjRlZDA3NWY4MTIzMDkxZmU1MDQ3YThmNjM5MpbL/vk=: 00:17:06.836 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.836 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.836 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.836 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.836 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.836 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:06.836 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.836 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.836 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.836 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:06.836 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:06.836 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:06.836 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:17:06.836 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.836 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:06.836 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.836 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:06.836 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:06.836 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:07.404 request: 00:17:07.404 { 00:17:07.404 "name": "nvme0", 00:17:07.404 "trtype": "tcp", 00:17:07.404 "traddr": "10.0.0.2", 00:17:07.404 "adrfam": "ipv4", 00:17:07.404 "trsvcid": "4420", 00:17:07.404 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:07.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:07.404 "prchk_reftag": false, 00:17:07.404 "prchk_guard": false, 00:17:07.404 "hdgst": false, 00:17:07.404 "ddgst": false, 00:17:07.404 "dhchap_key": "key2", 00:17:07.404 "allow_unrecognized_csi": false, 00:17:07.404 "method": "bdev_nvme_attach_controller", 00:17:07.404 "req_id": 1 00:17:07.404 } 00:17:07.404 Got JSON-RPC error response 00:17:07.404 response: 00:17:07.404 { 00:17:07.404 "code": -5, 00:17:07.404 "message": 
"Input/output error" 00:17:07.404 } 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:07.404 15:26:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:07.404 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:07.971 request: 00:17:07.971 { 00:17:07.971 "name": "nvme0", 00:17:07.971 "trtype": "tcp", 00:17:07.971 "traddr": "10.0.0.2", 00:17:07.971 "adrfam": "ipv4", 00:17:07.971 "trsvcid": "4420", 00:17:07.971 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:07.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:07.971 "prchk_reftag": false, 00:17:07.971 "prchk_guard": false, 00:17:07.971 "hdgst": 
false, 00:17:07.971 "ddgst": false, 00:17:07.971 "dhchap_key": "key1", 00:17:07.971 "dhchap_ctrlr_key": "ckey2", 00:17:07.971 "allow_unrecognized_csi": false, 00:17:07.971 "method": "bdev_nvme_attach_controller", 00:17:07.971 "req_id": 1 00:17:07.971 } 00:17:07.971 Got JSON-RPC error response 00:17:07.971 response: 00:17:07.971 { 00:17:07.971 "code": -5, 00:17:07.971 "message": "Input/output error" 00:17:07.971 } 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.971 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.230 request: 00:17:08.230 { 00:17:08.230 "name": "nvme0", 00:17:08.230 "trtype": 
"tcp", 00:17:08.230 "traddr": "10.0.0.2", 00:17:08.230 "adrfam": "ipv4", 00:17:08.230 "trsvcid": "4420", 00:17:08.230 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:08.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:08.230 "prchk_reftag": false, 00:17:08.230 "prchk_guard": false, 00:17:08.230 "hdgst": false, 00:17:08.230 "ddgst": false, 00:17:08.230 "dhchap_key": "key1", 00:17:08.230 "dhchap_ctrlr_key": "ckey1", 00:17:08.230 "allow_unrecognized_csi": false, 00:17:08.230 "method": "bdev_nvme_attach_controller", 00:17:08.230 "req_id": 1 00:17:08.230 } 00:17:08.230 Got JSON-RPC error response 00:17:08.230 response: 00:17:08.230 { 00:17:08.230 "code": -5, 00:17:08.230 "message": "Input/output error" 00:17:08.230 } 00:17:08.230 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:08.230 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.230 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.230 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.230 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.230 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.230 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.230 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.230 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2141460 00:17:08.230 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 2141460 ']' 00:17:08.230 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2141460 00:17:08.230 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:08.230 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:08.230 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2141460 00:17:08.230 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:08.230 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:08.230 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2141460' 00:17:08.230 killing process with pid 2141460 00:17:08.230 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2141460 00:17:08.230 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2141460 00:17:08.490 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:08.490 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:08.490 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:08.490 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.490 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2163577 00:17:08.490 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:08.490 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2163577 00:17:08.490 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2163577 ']' 00:17:08.490 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.490 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.490 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.490 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.490 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.749 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.749 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:08.750 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:08.750 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:08.750 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.750 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.750 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:08.750 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 2163577 00:17:08.750 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2163577 ']' 00:17:08.750 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.750 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.750 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.750 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.750 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.009 null0 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yNk 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.da9 ]] 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.da9 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ZWF 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.zTo ]] 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zTo 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.yDY 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.p1z ]] 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.p1z 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.009 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.267 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.267 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:09.267 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.M3E 00:17:09.267 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.267 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.267 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:09.267 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:09.267 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:09.267 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.267 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:09.267 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:09.267 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:09.267 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.267 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:09.267 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.267 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.267 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.267 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:09.267 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.267 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.834 nvme0n1 00:17:09.834 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.834 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.834 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.093 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.093 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.093 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.093 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.093 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.093 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.093 { 00:17:10.093 "cntlid": 1, 00:17:10.093 "qid": 0, 00:17:10.093 "state": "enabled", 00:17:10.093 "thread": "nvmf_tgt_poll_group_000", 00:17:10.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:10.093 "listen_address": { 00:17:10.093 "trtype": "TCP", 00:17:10.093 "adrfam": "IPv4", 00:17:10.093 "traddr": "10.0.0.2", 00:17:10.093 "trsvcid": "4420" 00:17:10.093 }, 00:17:10.093 "peer_address": { 00:17:10.093 "trtype": "TCP", 00:17:10.093 "adrfam": "IPv4", 00:17:10.093 "traddr": 
"10.0.0.1", 00:17:10.093 "trsvcid": "59002" 00:17:10.093 }, 00:17:10.093 "auth": { 00:17:10.093 "state": "completed", 00:17:10.093 "digest": "sha512", 00:17:10.093 "dhgroup": "ffdhe8192" 00:17:10.093 } 00:17:10.093 } 00:17:10.093 ]' 00:17:10.093 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.093 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.093 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.352 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:10.352 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.352 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.352 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.352 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.611 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:17:10.611 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:17:11.182 15:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.182 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.182 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.182 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.182 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.182 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:11.182 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.182 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.182 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.182 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:11.182 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:11.182 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:11.182 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:11.182 15:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:11.182 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:11.182 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.182 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:11.440 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.441 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:11.441 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.441 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.441 request: 00:17:11.441 { 00:17:11.441 "name": "nvme0", 00:17:11.441 "trtype": "tcp", 00:17:11.441 "traddr": "10.0.0.2", 00:17:11.441 "adrfam": "ipv4", 00:17:11.441 "trsvcid": "4420", 00:17:11.441 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:11.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:11.441 "prchk_reftag": false, 00:17:11.441 "prchk_guard": false, 00:17:11.441 "hdgst": false, 00:17:11.441 "ddgst": false, 00:17:11.441 "dhchap_key": "key3", 00:17:11.441 
"allow_unrecognized_csi": false, 00:17:11.441 "method": "bdev_nvme_attach_controller", 00:17:11.441 "req_id": 1 00:17:11.441 } 00:17:11.441 Got JSON-RPC error response 00:17:11.441 response: 00:17:11.441 { 00:17:11.441 "code": -5, 00:17:11.441 "message": "Input/output error" 00:17:11.441 } 00:17:11.441 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:11.441 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:11.441 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:11.441 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:11.441 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:11.441 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:11.441 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:11.441 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:11.699 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:11.699 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:11.699 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:11.699 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:11.699 15:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.699 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:11.699 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.699 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:11.699 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.699 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.957 request: 00:17:11.957 { 00:17:11.957 "name": "nvme0", 00:17:11.957 "trtype": "tcp", 00:17:11.957 "traddr": "10.0.0.2", 00:17:11.957 "adrfam": "ipv4", 00:17:11.957 "trsvcid": "4420", 00:17:11.957 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:11.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:11.957 "prchk_reftag": false, 00:17:11.957 "prchk_guard": false, 00:17:11.957 "hdgst": false, 00:17:11.957 "ddgst": false, 00:17:11.957 "dhchap_key": "key3", 00:17:11.957 "allow_unrecognized_csi": false, 00:17:11.957 "method": "bdev_nvme_attach_controller", 00:17:11.957 "req_id": 1 00:17:11.957 } 00:17:11.957 Got JSON-RPC error response 00:17:11.957 response: 00:17:11.957 { 00:17:11.957 "code": -5, 00:17:11.957 "message": "Input/output error" 00:17:11.957 } 00:17:11.957 
15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:11.957 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:11.957 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:11.957 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:11.957 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:11.957 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:11.957 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:11.957 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:11.958 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:11.958 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:12.216 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.216 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.216 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.216 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.216 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.216 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.216 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.216 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.216 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:12.216 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:12.216 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:12.216 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:12.216 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.216 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:12.216 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.216 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:12.216 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:12.216 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:12.475 request: 00:17:12.475 { 00:17:12.475 "name": "nvme0", 00:17:12.475 "trtype": "tcp", 00:17:12.475 "traddr": "10.0.0.2", 00:17:12.475 "adrfam": "ipv4", 00:17:12.475 "trsvcid": "4420", 00:17:12.475 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:12.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:12.475 "prchk_reftag": false, 00:17:12.475 "prchk_guard": false, 00:17:12.475 "hdgst": false, 00:17:12.475 "ddgst": false, 00:17:12.475 "dhchap_key": "key0", 00:17:12.475 "dhchap_ctrlr_key": "key1", 00:17:12.475 "allow_unrecognized_csi": false, 00:17:12.475 "method": "bdev_nvme_attach_controller", 00:17:12.475 "req_id": 1 00:17:12.475 } 00:17:12.475 Got JSON-RPC error response 00:17:12.475 response: 00:17:12.475 { 00:17:12.475 "code": -5, 00:17:12.475 "message": "Input/output error" 00:17:12.475 } 00:17:12.475 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:12.475 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:12.475 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:12.475 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:12.475 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:12.475 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:12.475 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:12.733 nvme0n1 00:17:12.733 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:12.733 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:12.733 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.992 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.992 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.992 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.250 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:13.250 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.250 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:13.250 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.250 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:13.251 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:13.251 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:13.818 nvme0n1 00:17:13.818 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:13.818 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:13.818 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.078 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.078 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:14.078 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.078 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.078 
15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.078 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:14.078 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.078 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:14.337 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.337 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:17:14.337 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: --dhchap-ctrl-secret DHHC-1:03:MmI0NmZjNmIwMjRiYzM5NGM5Zjg3MDYyNzUxMTRmYTFjYmZjMjI5YTdmNTA4N2JhYjRhNjg4ZGJjOWYzZDkxNBBkPYw=: 00:17:14.905 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:14.905 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:14.905 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:14.905 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:14.905 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:14.905 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:14.905 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:14.905 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.905 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.164 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:15.164 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:15.164 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:15.164 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:15.164 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.164 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:15.164 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.164 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:15.164 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:15.164 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:15.746 request: 00:17:15.746 { 00:17:15.746 "name": "nvme0", 00:17:15.746 "trtype": "tcp", 00:17:15.746 "traddr": "10.0.0.2", 00:17:15.746 "adrfam": "ipv4", 00:17:15.746 "trsvcid": "4420", 00:17:15.746 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:15.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:15.746 "prchk_reftag": false, 00:17:15.746 "prchk_guard": false, 00:17:15.746 "hdgst": false, 00:17:15.746 "ddgst": false, 00:17:15.746 "dhchap_key": "key1", 00:17:15.746 "allow_unrecognized_csi": false, 00:17:15.746 "method": "bdev_nvme_attach_controller", 00:17:15.746 "req_id": 1 00:17:15.746 } 00:17:15.746 Got JSON-RPC error response 00:17:15.746 response: 00:17:15.746 { 00:17:15.746 "code": -5, 00:17:15.746 "message": "Input/output error" 00:17:15.746 } 00:17:15.746 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:15.746 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:15.746 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:15.746 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:15.746 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:15.746 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:15.746 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:16.314 nvme0n1 00:17:16.314 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:16.314 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:16.314 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.572 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.572 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.572 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.832 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:16.832 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.832 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.832 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.832 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:16.832 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:16.832 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:17.091 nvme0n1 00:17:17.091 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:17.091 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:17.091 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.351 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.351 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.351 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.351 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:17.351 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.351 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.351 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.351 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: '' 2s 00:17:17.351 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:17.351 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:17.351 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: 00:17:17.351 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:17.351 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:17.351 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:17.351 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: ]] 00:17:17.351 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NDRiNDVlZjQzNDc2YzViNDI1MzY4MmFiODk4MjBmMWVGbMsY: 00:17:17.351 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:17.351 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:17.351 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:19.885 
15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: 2s 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:19.885 15:26:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: ]] 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NDMyODFmMWVhZDM1OTQ4OTFmNDZhODY3OTFjNDY4ODEyYzYwZmRiODU2YjhiOTE3pqqykA==: 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:19.885 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:21.793 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:21.793 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:21.793 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:21.793 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:21.793 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:21.793 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:21.793 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:21.793 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.793 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:21.793 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.793 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.793 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.793 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:21.793 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:21.794 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:22.361 nvme0n1 00:17:22.361 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:17:22.361 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.361 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.361 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.361 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:22.361 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:22.928 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:22.928 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:22.928 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.928 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.928 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:22.928 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.928 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.928 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.928 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:22.928 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:23.187 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:23.187 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:23.187 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.446 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.446 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:23.446 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.446 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.446 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.446 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:23.446 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:23.446 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:23.446 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:23.446 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.446 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:23.446 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.446 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:23.446 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:24.014 request: 00:17:24.014 { 00:17:24.014 "name": "nvme0", 00:17:24.014 "dhchap_key": "key1", 00:17:24.014 "dhchap_ctrlr_key": "key3", 00:17:24.014 "method": "bdev_nvme_set_keys", 00:17:24.014 "req_id": 1 00:17:24.014 } 00:17:24.014 Got JSON-RPC error response 00:17:24.014 response: 00:17:24.014 { 00:17:24.014 "code": -13, 00:17:24.014 "message": "Permission denied" 00:17:24.014 } 00:17:24.014 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:24.014 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.014 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.014 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.014 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:24.014 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:24.014 15:26:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.014 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:24.014 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:25.392 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:25.392 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:25.392 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.393 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:25.393 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:25.393 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.393 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.393 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.393 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:25.393 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:25.393 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:25.962 nvme0n1 00:17:26.220 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:26.220 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.220 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.220 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.220 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:26.220 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:26.220 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:26.220 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:26.220 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.220 15:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:26.220 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.220 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:26.221 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:26.479 request: 00:17:26.479 { 00:17:26.479 "name": "nvme0", 00:17:26.479 "dhchap_key": "key2", 00:17:26.479 "dhchap_ctrlr_key": "key0", 00:17:26.479 "method": "bdev_nvme_set_keys", 00:17:26.479 "req_id": 1 00:17:26.479 } 00:17:26.479 Got JSON-RPC error response 00:17:26.479 response: 00:17:26.479 { 00:17:26.479 "code": -13, 00:17:26.479 "message": "Permission denied" 00:17:26.479 } 00:17:26.738 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:26.738 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:26.738 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:26.738 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:26.738 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:26.738 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:26.738 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.738 15:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:26.738 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:28.115 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:28.115 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:28.116 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.116 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:28.116 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:28.116 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:28.116 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2141487 00:17:28.116 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2141487 ']' 00:17:28.116 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2141487 00:17:28.116 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:28.116 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.116 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2141487 00:17:28.116 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:28.116 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:28.116 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 2141487' 00:17:28.116 killing process with pid 2141487 00:17:28.116 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2141487 00:17:28.116 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2141487 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:28.375 rmmod nvme_tcp 00:17:28.375 rmmod nvme_fabrics 00:17:28.375 rmmod nvme_keyring 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2163577 ']' 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2163577 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2163577 ']' 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2163577 
00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2163577 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2163577' 00:17:28.375 killing process with pid 2163577 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2163577 00:17:28.375 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2163577 00:17:28.634 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:28.634 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:28.634 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:28.634 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:28.634 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:28.634 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:28.634 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:28.634 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:28.634 15:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:28.634 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.634 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.634 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.yNk /tmp/spdk.key-sha256.ZWF /tmp/spdk.key-sha384.yDY /tmp/spdk.key-sha512.M3E /tmp/spdk.key-sha512.da9 /tmp/spdk.key-sha384.zTo /tmp/spdk.key-sha256.p1z '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:31.171 00:17:31.171 real 2m33.658s 00:17:31.171 user 5m54.405s 00:17:31.171 sys 0m24.345s 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.171 ************************************ 00:17:31.171 END TEST nvmf_auth_target 00:17:31.171 ************************************ 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:31.171 ************************************ 00:17:31.171 START TEST nvmf_bdevio_no_huge 00:17:31.171 ************************************ 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:31.171 * Looking for test storage... 00:17:31.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:31.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.171 --rc genhtml_branch_coverage=1 00:17:31.171 --rc genhtml_function_coverage=1 00:17:31.171 --rc genhtml_legend=1 00:17:31.171 --rc geninfo_all_blocks=1 00:17:31.171 --rc geninfo_unexecuted_blocks=1 00:17:31.171 00:17:31.171 ' 00:17:31.171 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:31.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.172 --rc genhtml_branch_coverage=1 00:17:31.172 --rc genhtml_function_coverage=1 00:17:31.172 --rc genhtml_legend=1 00:17:31.172 --rc geninfo_all_blocks=1 00:17:31.172 --rc geninfo_unexecuted_blocks=1 00:17:31.172 00:17:31.172 ' 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:31.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.172 --rc genhtml_branch_coverage=1 00:17:31.172 --rc genhtml_function_coverage=1 00:17:31.172 --rc genhtml_legend=1 00:17:31.172 --rc geninfo_all_blocks=1 00:17:31.172 --rc geninfo_unexecuted_blocks=1 00:17:31.172 00:17:31.172 ' 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:31.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.172 --rc genhtml_branch_coverage=1 
00:17:31.172 --rc genhtml_function_coverage=1 00:17:31.172 --rc genhtml_legend=1 00:17:31.172 --rc geninfo_all_blocks=1 00:17:31.172 --rc geninfo_unexecuted_blocks=1 00:17:31.172 00:17:31.172 ' 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.172 15:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:31.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:31.172 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:17:36.577 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:36.577 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:36.577 Found net devices under 0000:86:00.0: cvl_0_0 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.577 
15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:36.577 Found net devices under 0000:86:00.1: cvl_0_1 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:36.577 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:36.578 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:36.578 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.578 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.578 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:36.578 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:36.578 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:36.578 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:36.578 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:36.578 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:17:36.578 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:36.578 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.578 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:36.578 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:36.578 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:36.578 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:17:36.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:17:36.838 00:17:36.838 --- 10.0.0.2 ping statistics --- 00:17:36.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.838 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:36.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:36.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:17:36.838 00:17:36.838 --- 10.0.0.1 ping statistics --- 00:17:36.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.838 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.838 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2170559 00:17:36.839 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2170559 00:17:36.839 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:36.839 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2170559 ']' 00:17:36.839 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.839 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.839 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.839 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.839 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:37.098 [2024-11-20 15:26:40.765853] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:17:37.098 [2024-11-20 15:26:40.765901] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:37.098 [2024-11-20 15:26:40.850275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:37.098 [2024-11-20 15:26:40.897864] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.098 [2024-11-20 15:26:40.897902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.098 [2024-11-20 15:26:40.897909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.098 [2024-11-20 15:26:40.897915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.098 [2024-11-20 15:26:40.897920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:37.098 [2024-11-20 15:26:40.899070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:37.098 [2024-11-20 15:26:40.899178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:37.098 [2024-11-20 15:26:40.899284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:37.098 [2024-11-20 15:26:40.899285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.033 [2024-11-20 15:26:41.646315] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:38.033 15:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.033 Malloc0 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.033 [2024-11-20 15:26:41.690584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.033 15:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:38.033 { 00:17:38.033 "params": { 00:17:38.033 "name": "Nvme$subsystem", 00:17:38.033 "trtype": "$TEST_TRANSPORT", 00:17:38.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:38.033 "adrfam": "ipv4", 00:17:38.033 "trsvcid": "$NVMF_PORT", 00:17:38.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:38.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:38.033 "hdgst": ${hdgst:-false}, 00:17:38.033 "ddgst": ${ddgst:-false} 00:17:38.033 }, 00:17:38.033 "method": "bdev_nvme_attach_controller" 00:17:38.033 } 00:17:38.033 EOF 00:17:38.033 )") 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:38.033 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:38.033 "params": { 00:17:38.033 "name": "Nvme1", 00:17:38.033 "trtype": "tcp", 00:17:38.033 "traddr": "10.0.0.2", 00:17:38.033 "adrfam": "ipv4", 00:17:38.033 "trsvcid": "4420", 00:17:38.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.033 "hdgst": false, 00:17:38.033 "ddgst": false 00:17:38.033 }, 00:17:38.033 "method": "bdev_nvme_attach_controller" 00:17:38.033 }' 00:17:38.033 [2024-11-20 15:26:41.743580] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:17:38.033 [2024-11-20 15:26:41.743624] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2170723 ] 00:17:38.033 [2024-11-20 15:26:41.821211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:38.033 [2024-11-20 15:26:41.870315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.033 [2024-11-20 15:26:41.870426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.033 [2024-11-20 15:26:41.870425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.292 I/O targets: 00:17:38.292 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:38.292 00:17:38.292 00:17:38.292 CUnit - A unit testing framework for C - Version 2.1-3 00:17:38.292 http://cunit.sourceforge.net/ 00:17:38.292 00:17:38.292 00:17:38.292 Suite: bdevio tests on: Nvme1n1 00:17:38.292 Test: blockdev write read block ...passed 00:17:38.292 Test: blockdev write zeroes read block ...passed 00:17:38.292 Test: blockdev write zeroes read no split ...passed 00:17:38.292 Test: blockdev write zeroes 
read split ...passed 00:17:38.549 Test: blockdev write zeroes read split partial ...passed 00:17:38.549 Test: blockdev reset ...[2024-11-20 15:26:42.202312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:38.549 [2024-11-20 15:26:42.202374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bd920 (9): Bad file descriptor 00:17:38.549 [2024-11-20 15:26:42.218446] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:17:38.549 passed 00:17:38.549 Test: blockdev write read 8 blocks ...passed 00:17:38.549 Test: blockdev write read size > 128k ...passed 00:17:38.549 Test: blockdev write read invalid size ...passed 00:17:38.549 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:38.549 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:38.549 Test: blockdev write read max offset ...passed 00:17:38.549 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:38.549 Test: blockdev writev readv 8 blocks ...passed 00:17:38.549 Test: blockdev writev readv 30 x 1block ...passed 00:17:38.549 Test: blockdev writev readv block ...passed 00:17:38.806 Test: blockdev writev readv size > 128k ...passed 00:17:38.806 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:38.806 Test: blockdev comparev and writev ...[2024-11-20 15:26:42.469944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.807 [2024-11-20 15:26:42.469976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.807 [2024-11-20 15:26:42.469992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.807 [2024-11-20 
15:26:42.470000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.807 [2024-11-20 15:26:42.470254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.807 [2024-11-20 15:26:42.470264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:38.807 [2024-11-20 15:26:42.470275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.807 [2024-11-20 15:26:42.470282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:38.807 [2024-11-20 15:26:42.470505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.807 [2024-11-20 15:26:42.470515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:38.807 [2024-11-20 15:26:42.470527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.807 [2024-11-20 15:26:42.470534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:38.807 [2024-11-20 15:26:42.470757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.807 [2024-11-20 15:26:42.470767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:38.807 [2024-11-20 15:26:42.470778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.807 [2024-11-20 15:26:42.470792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:38.807 passed 00:17:38.807 Test: blockdev nvme passthru rw ...passed 00:17:38.807 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:26:42.553357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:38.807 [2024-11-20 15:26:42.553373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:38.807 [2024-11-20 15:26:42.553475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:38.807 [2024-11-20 15:26:42.553484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:38.807 [2024-11-20 15:26:42.553584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:38.807 [2024-11-20 15:26:42.553593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:38.807 [2024-11-20 15:26:42.553698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:38.807 [2024-11-20 15:26:42.553707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:38.807 passed 00:17:38.807 Test: blockdev nvme admin passthru ...passed 00:17:38.807 Test: blockdev copy ...passed 00:17:38.807 00:17:38.807 Run Summary: Type Total Ran Passed Failed Inactive 00:17:38.807 suites 1 1 n/a 0 0 00:17:38.807 tests 23 23 23 0 0 00:17:38.807 asserts 152 152 152 0 n/a 00:17:38.807 00:17:38.807 Elapsed time = 1.078 seconds 
00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:39.065 rmmod nvme_tcp 00:17:39.065 rmmod nvme_fabrics 00:17:39.065 rmmod nvme_keyring 00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2170559 ']' 00:17:39.065 15:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2170559 00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2170559 ']' 00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2170559 00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.065 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2170559 00:17:39.323 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:39.323 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:39.323 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2170559' 00:17:39.323 killing process with pid 2170559 00:17:39.323 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2170559 00:17:39.323 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2170559 00:17:39.581 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:39.581 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:39.581 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:39.581 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:39.581 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:39.581 15:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:39.582 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:39.582 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:39.582 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:39.582 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.582 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.582 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.484 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:41.484 00:17:41.484 real 0m10.806s 00:17:41.484 user 0m13.266s 00:17:41.484 sys 0m5.402s 00:17:41.484 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:41.484 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:41.484 ************************************ 00:17:41.484 END TEST nvmf_bdevio_no_huge 00:17:41.484 ************************************ 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:41.743 
************************************ 00:17:41.743 START TEST nvmf_tls 00:17:41.743 ************************************ 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:41.743 * Looking for test storage... 00:17:41.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:41.743 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:41.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.744 --rc genhtml_branch_coverage=1 00:17:41.744 --rc genhtml_function_coverage=1 00:17:41.744 --rc genhtml_legend=1 00:17:41.744 --rc geninfo_all_blocks=1 00:17:41.744 --rc geninfo_unexecuted_blocks=1 00:17:41.744 00:17:41.744 ' 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:41.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.744 --rc genhtml_branch_coverage=1 00:17:41.744 --rc genhtml_function_coverage=1 00:17:41.744 --rc genhtml_legend=1 00:17:41.744 --rc geninfo_all_blocks=1 00:17:41.744 --rc geninfo_unexecuted_blocks=1 00:17:41.744 00:17:41.744 ' 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:41.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.744 --rc genhtml_branch_coverage=1 00:17:41.744 --rc genhtml_function_coverage=1 00:17:41.744 --rc genhtml_legend=1 00:17:41.744 --rc geninfo_all_blocks=1 00:17:41.744 --rc geninfo_unexecuted_blocks=1 00:17:41.744 00:17:41.744 ' 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:41.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.744 --rc genhtml_branch_coverage=1 00:17:41.744 --rc genhtml_function_coverage=1 00:17:41.744 --rc genhtml_legend=1 00:17:41.744 --rc geninfo_all_blocks=1 00:17:41.744 --rc geninfo_unexecuted_blocks=1 00:17:41.744 00:17:41.744 ' 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.744 
15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.744 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:42.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:17:42.002 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:48.570 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:48.571 15:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:48.571 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:48.571 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:48.571 15:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:48.571 Found net devices under 0000:86:00.0: cvl_0_0 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:48.571 Found net devices under 0000:86:00.1: cvl_0_1 00:17:48.571 15:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:48.571 
15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:48.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:48.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:17:48.571 00:17:48.571 --- 10.0.0.2 ping statistics --- 00:17:48.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.571 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:48.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:48.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:17:48.571 00:17:48.571 --- 10.0.0.1 ping statistics --- 00:17:48.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.571 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2174484 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2174484 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2174484 ']' 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.571 [2024-11-20 15:26:51.650626] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:17:48.571 [2024-11-20 15:26:51.650672] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.571 [2024-11-20 15:26:51.732086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.571 [2024-11-20 15:26:51.773330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.571 [2024-11-20 15:26:51.773365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:48.571 [2024-11-20 15:26:51.773372] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.571 [2024-11-20 15:26:51.773378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.571 [2024-11-20 15:26:51.773383] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:48.571 [2024-11-20 15:26:51.773934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:48.571 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:48.571 true 00:17:48.571 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:48.571 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:48.571 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:48.571 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:48.571 
15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:48.571 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:48.571 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:48.830 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:48.830 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:48.830 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:49.089 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:49.090 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:49.349 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:49.349 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:49.349 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:49.349 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:49.349 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:49.349 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:49.349 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:17:49.608 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:49.608 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:49.867 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:49.867 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:49.867 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:49.867 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:49.867 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:50.126 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:50.126 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:50.126 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:50.126 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:50.126 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:50.127 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:50.127 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:50.127 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:50.127 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:50.127 15:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:50.127 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:50.127 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:50.127 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:50.127 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:50.127 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:50.127 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:50.127 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:50.127 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:50.127 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:50.127 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.5OoTjj8Hbo 00:17:50.127 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:50.127 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.2DMUEMks4h 00:17:50.387 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:50.387 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:50.387 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.5OoTjj8Hbo 00:17:50.387 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.2DMUEMks4h 00:17:50.387 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:50.387 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:50.646 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.5OoTjj8Hbo 00:17:50.646 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5OoTjj8Hbo 00:17:50.646 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:50.905 [2024-11-20 15:26:54.664371] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.905 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:51.164 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:51.164 [2024-11-20 15:26:55.029302] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:51.164 [2024-11-20 15:26:55.029537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.164 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:51.423 malloc0 00:17:51.423 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:51.681 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5OoTjj8Hbo 00:17:51.940 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:51.940 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.5OoTjj8Hbo 00:18:04.149 Initializing NVMe Controllers 00:18:04.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:04.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:04.149 Initialization complete. Launching workers. 
00:18:04.149 ======================================================== 00:18:04.149 Latency(us) 00:18:04.149 Device Information : IOPS MiB/s Average min max 00:18:04.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16300.67 63.67 3926.34 833.44 5219.85 00:18:04.149 ======================================================== 00:18:04.149 Total : 16300.67 63.67 3926.34 833.44 5219.85 00:18:04.149 00:18:04.149 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5OoTjj8Hbo 00:18:04.149 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:04.149 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:04.149 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:04.149 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5OoTjj8Hbo 00:18:04.149 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:04.149 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2176976 00:18:04.149 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:04.149 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:04.149 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2176976 /var/tmp/bdevperf.sock 00:18:04.149 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2176976 ']' 00:18:04.149 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:18:04.149 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.149 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.149 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.149 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.149 [2024-11-20 15:27:05.962881] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:18:04.149 [2024-11-20 15:27:05.962934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2176976 ] 00:18:04.149 [2024-11-20 15:27:06.037292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.149 [2024-11-20 15:27:06.079903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.149 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.149 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:04.149 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5OoTjj8Hbo 00:18:04.149 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:18:04.149 [2024-11-20 15:27:06.532416] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:04.149 TLSTESTn1 00:18:04.149 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:04.149 Running I/O for 10 seconds... 00:18:05.085 5353.00 IOPS, 20.91 MiB/s [2024-11-20T14:27:09.930Z] 5406.00 IOPS, 21.12 MiB/s [2024-11-20T14:27:10.866Z] 5407.67 IOPS, 21.12 MiB/s [2024-11-20T14:27:11.802Z] 5342.50 IOPS, 20.87 MiB/s [2024-11-20T14:27:12.739Z] 5365.20 IOPS, 20.96 MiB/s [2024-11-20T14:27:14.118Z] 5395.67 IOPS, 21.08 MiB/s [2024-11-20T14:27:15.055Z] 5392.86 IOPS, 21.07 MiB/s [2024-11-20T14:27:15.990Z] 5415.88 IOPS, 21.16 MiB/s [2024-11-20T14:27:16.926Z] 5422.11 IOPS, 21.18 MiB/s [2024-11-20T14:27:16.926Z] 5422.10 IOPS, 21.18 MiB/s 00:18:13.018 Latency(us) 00:18:13.018 [2024-11-20T14:27:16.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.018 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:13.018 Verification LBA range: start 0x0 length 0x2000 00:18:13.018 TLSTESTn1 : 10.01 5428.03 21.20 0.00 0.00 23546.84 5413.84 22795.13 00:18:13.018 [2024-11-20T14:27:16.926Z] =================================================================================================================== 00:18:13.018 [2024-11-20T14:27:16.926Z] Total : 5428.03 21.20 0.00 0.00 23546.84 5413.84 22795.13 00:18:13.018 { 00:18:13.018 "results": [ 00:18:13.018 { 00:18:13.018 "job": "TLSTESTn1", 00:18:13.018 "core_mask": "0x4", 00:18:13.018 "workload": "verify", 00:18:13.018 "status": "finished", 00:18:13.018 "verify_range": { 00:18:13.018 "start": 0, 00:18:13.018 "length": 8192 00:18:13.018 }, 00:18:13.018 "queue_depth": 128, 00:18:13.019 "io_size": 4096, 00:18:13.019 "runtime": 10.012655, 00:18:13.019 "iops": 
5428.030826988446, 00:18:13.019 "mibps": 21.20324541792362, 00:18:13.019 "io_failed": 0, 00:18:13.019 "io_timeout": 0, 00:18:13.019 "avg_latency_us": 23546.844099335452, 00:18:13.019 "min_latency_us": 5413.843478260869, 00:18:13.019 "max_latency_us": 22795.130434782608 00:18:13.019 } 00:18:13.019 ], 00:18:13.019 "core_count": 1 00:18:13.019 } 00:18:13.019 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:13.019 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2176976 00:18:13.019 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2176976 ']' 00:18:13.019 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2176976 00:18:13.019 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:13.019 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.019 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2176976 00:18:13.019 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:13.019 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:13.019 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2176976' 00:18:13.019 killing process with pid 2176976 00:18:13.019 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2176976 00:18:13.019 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.019 00:18:13.019 Latency(us) 00:18:13.019 [2024-11-20T14:27:16.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.019 [2024-11-20T14:27:16.927Z] 
=================================================================================================================== 00:18:13.019 [2024-11-20T14:27:16.927Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.019 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2176976 00:18:13.278 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2DMUEMks4h 00:18:13.278 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:13.278 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2DMUEMks4h 00:18:13.278 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:13.279 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.279 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:13.279 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.279 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2DMUEMks4h 00:18:13.279 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:13.279 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:13.279 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:13.279 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2DMUEMks4h 00:18:13.279 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.279 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2179194 00:18:13.279 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.279 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.279 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2179194 /var/tmp/bdevperf.sock 00:18:13.279 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2179194 ']' 00:18:13.279 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.279 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.279 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.279 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.279 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.279 [2024-11-20 15:27:17.034797] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:18:13.279 [2024-11-20 15:27:17.034846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179194 ] 00:18:13.279 [2024-11-20 15:27:17.105459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.279 [2024-11-20 15:27:17.142510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.538 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.538 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:13.538 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2DMUEMks4h 00:18:13.796 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:13.796 [2024-11-20 15:27:17.634176] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:13.796 [2024-11-20 15:27:17.645102] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:13.796 [2024-11-20 15:27:17.645599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1d170 (107): Transport endpoint is not connected 00:18:13.796 [2024-11-20 15:27:17.646594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1d170 (9): Bad file descriptor 00:18:13.796 
[2024-11-20 15:27:17.647595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:13.796 [2024-11-20 15:27:17.647604] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:13.796 [2024-11-20 15:27:17.647611] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:13.796 [2024-11-20 15:27:17.647621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:13.796 request: 00:18:13.796 { 00:18:13.796 "name": "TLSTEST", 00:18:13.796 "trtype": "tcp", 00:18:13.796 "traddr": "10.0.0.2", 00:18:13.796 "adrfam": "ipv4", 00:18:13.796 "trsvcid": "4420", 00:18:13.796 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.796 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.796 "prchk_reftag": false, 00:18:13.796 "prchk_guard": false, 00:18:13.796 "hdgst": false, 00:18:13.796 "ddgst": false, 00:18:13.796 "psk": "key0", 00:18:13.796 "allow_unrecognized_csi": false, 00:18:13.796 "method": "bdev_nvme_attach_controller", 00:18:13.796 "req_id": 1 00:18:13.796 } 00:18:13.796 Got JSON-RPC error response 00:18:13.796 response: 00:18:13.796 { 00:18:13.796 "code": -5, 00:18:13.796 "message": "Input/output error" 00:18:13.796 } 00:18:13.797 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2179194 00:18:13.797 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2179194 ']' 00:18:13.797 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2179194 00:18:13.797 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:13.797 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.797 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2179194 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2179194' 00:18:14.056 killing process with pid 2179194 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2179194 00:18:14.056 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.056 00:18:14.056 Latency(us) 00:18:14.056 [2024-11-20T14:27:17.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.056 [2024-11-20T14:27:17.964Z] =================================================================================================================== 00:18:14.056 [2024-11-20T14:27:17.964Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2179194 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.5OoTjj8Hbo 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.5OoTjj8Hbo 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.5OoTjj8Hbo 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5OoTjj8Hbo 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2179431 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2179431 
/var/tmp/bdevperf.sock 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2179431 ']' 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.056 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.057 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.057 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.057 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.057 [2024-11-20 15:27:17.930639] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:18:14.057 [2024-11-20 15:27:17.930691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179431 ] 00:18:14.315 [2024-11-20 15:27:18.000809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.315 [2024-11-20 15:27:18.038321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.315 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.315 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:14.315 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5OoTjj8Hbo 00:18:14.574 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:14.833 [2024-11-20 15:27:18.496848] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.833 [2024-11-20 15:27:18.507226] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:14.833 [2024-11-20 15:27:18.507247] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:14.833 [2024-11-20 15:27:18.507270] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:14.833 [2024-11-20 15:27:18.508239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f24170 (107): Transport endpoint is not connected 00:18:14.833 [2024-11-20 15:27:18.509234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f24170 (9): Bad file descriptor 00:18:14.833 [2024-11-20 15:27:18.510235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:14.833 [2024-11-20 15:27:18.510244] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:14.833 [2024-11-20 15:27:18.510251] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:14.833 [2024-11-20 15:27:18.510261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:14.833 request: 00:18:14.833 { 00:18:14.833 "name": "TLSTEST", 00:18:14.833 "trtype": "tcp", 00:18:14.833 "traddr": "10.0.0.2", 00:18:14.833 "adrfam": "ipv4", 00:18:14.833 "trsvcid": "4420", 00:18:14.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.833 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:14.833 "prchk_reftag": false, 00:18:14.833 "prchk_guard": false, 00:18:14.833 "hdgst": false, 00:18:14.833 "ddgst": false, 00:18:14.833 "psk": "key0", 00:18:14.833 "allow_unrecognized_csi": false, 00:18:14.833 "method": "bdev_nvme_attach_controller", 00:18:14.833 "req_id": 1 00:18:14.833 } 00:18:14.833 Got JSON-RPC error response 00:18:14.833 response: 00:18:14.833 { 00:18:14.833 "code": -5, 00:18:14.833 "message": "Input/output error" 00:18:14.833 } 00:18:14.833 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2179431 00:18:14.833 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2179431 ']' 00:18:14.833 15:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2179431 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2179431 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2179431' 00:18:14.834 killing process with pid 2179431 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2179431 00:18:14.834 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.834 00:18:14.834 Latency(us) 00:18:14.834 [2024-11-20T14:27:18.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.834 [2024-11-20T14:27:18.742Z] =================================================================================================================== 00:18:14.834 [2024-11-20T14:27:18.742Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2179431 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.834 15:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.5OoTjj8Hbo 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.5OoTjj8Hbo 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.5OoTjj8Hbo 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:14.834 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5OoTjj8Hbo 00:18:15.093 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:15.093 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2179523 00:18:15.093 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:15.093 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:15.093 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2179523 /var/tmp/bdevperf.sock 00:18:15.093 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2179523 ']' 00:18:15.093 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.093 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.093 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:15.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:15.093 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.093 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.093 [2024-11-20 15:27:18.786543] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:18:15.093 [2024-11-20 15:27:18.786595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179523 ] 00:18:15.093 [2024-11-20 15:27:18.860013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.093 [2024-11-20 15:27:18.899148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.093 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.093 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:15.093 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5OoTjj8Hbo 00:18:15.352 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:15.611 [2024-11-20 15:27:19.366641] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:15.611 [2024-11-20 15:27:19.371361] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:15.611 [2024-11-20 15:27:19.371381] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:15.611 [2024-11-20 15:27:19.371404] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:15.611 [2024-11-20 15:27:19.372084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208c170 (107): Transport endpoint is not connected 00:18:15.611 [2024-11-20 15:27:19.373077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208c170 (9): Bad file descriptor 00:18:15.611 [2024-11-20 15:27:19.374078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:15.611 [2024-11-20 15:27:19.374088] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:15.611 [2024-11-20 15:27:19.374095] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:15.611 [2024-11-20 15:27:19.374107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:15.611 request: 00:18:15.611 { 00:18:15.611 "name": "TLSTEST", 00:18:15.611 "trtype": "tcp", 00:18:15.611 "traddr": "10.0.0.2", 00:18:15.611 "adrfam": "ipv4", 00:18:15.611 "trsvcid": "4420", 00:18:15.611 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:15.611 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:15.611 "prchk_reftag": false, 00:18:15.611 "prchk_guard": false, 00:18:15.611 "hdgst": false, 00:18:15.611 "ddgst": false, 00:18:15.611 "psk": "key0", 00:18:15.611 "allow_unrecognized_csi": false, 00:18:15.611 "method": "bdev_nvme_attach_controller", 00:18:15.611 "req_id": 1 00:18:15.611 } 00:18:15.611 Got JSON-RPC error response 00:18:15.611 response: 00:18:15.611 { 00:18:15.611 "code": -5, 00:18:15.611 "message": "Input/output error" 00:18:15.611 } 00:18:15.611 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2179523 00:18:15.611 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2179523 ']' 00:18:15.611 15:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2179523 00:18:15.611 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:15.611 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.611 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2179523 00:18:15.611 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:15.611 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:15.611 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2179523' 00:18:15.611 killing process with pid 2179523 00:18:15.611 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2179523 00:18:15.611 Received shutdown signal, test time was about 10.000000 seconds 00:18:15.611 00:18:15.611 Latency(us) 00:18:15.611 [2024-11-20T14:27:19.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.611 [2024-11-20T14:27:19.519Z] =================================================================================================================== 00:18:15.611 [2024-11-20T14:27:19.519Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:15.611 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2179523 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:15.870 15:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2179682 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:15.870 15:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2179682 /var/tmp/bdevperf.sock 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2179682 ']' 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:15.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.870 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.870 [2024-11-20 15:27:19.652563] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:18:15.871 [2024-11-20 15:27:19.652614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179682 ] 00:18:15.871 [2024-11-20 15:27:19.728271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.871 [2024-11-20 15:27:19.769103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.130 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.130 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:16.131 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:16.390 [2024-11-20 15:27:20.039488] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:16.390 [2024-11-20 15:27:20.039527] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:16.390 request: 00:18:16.390 { 00:18:16.390 "name": "key0", 00:18:16.390 "path": "", 00:18:16.390 "method": "keyring_file_add_key", 00:18:16.390 "req_id": 1 00:18:16.390 } 00:18:16.390 Got JSON-RPC error response 00:18:16.390 response: 00:18:16.390 { 00:18:16.390 "code": -1, 00:18:16.390 "message": "Operation not permitted" 00:18:16.390 } 00:18:16.390 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:16.390 [2024-11-20 15:27:20.252138] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:16.390 [2024-11-20 15:27:20.252172] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:16.390 request: 00:18:16.390 { 00:18:16.390 "name": "TLSTEST", 00:18:16.390 "trtype": "tcp", 00:18:16.390 "traddr": "10.0.0.2", 00:18:16.390 "adrfam": "ipv4", 00:18:16.390 "trsvcid": "4420", 00:18:16.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:16.390 "prchk_reftag": false, 00:18:16.390 "prchk_guard": false, 00:18:16.390 "hdgst": false, 00:18:16.390 "ddgst": false, 00:18:16.390 "psk": "key0", 00:18:16.390 "allow_unrecognized_csi": false, 00:18:16.390 "method": "bdev_nvme_attach_controller", 00:18:16.391 "req_id": 1 00:18:16.391 } 00:18:16.391 Got JSON-RPC error response 00:18:16.391 response: 00:18:16.391 { 00:18:16.391 "code": -126, 00:18:16.391 "message": "Required key not available" 00:18:16.391 } 00:18:16.391 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2179682 00:18:16.391 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2179682 ']' 00:18:16.391 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2179682 00:18:16.391 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:16.391 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:16.391 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2179682 00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2179682' 00:18:16.651 killing process with pid 2179682 
00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2179682 00:18:16.651 Received shutdown signal, test time was about 10.000000 seconds 00:18:16.651 00:18:16.651 Latency(us) 00:18:16.651 [2024-11-20T14:27:20.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.651 [2024-11-20T14:27:20.559Z] =================================================================================================================== 00:18:16.651 [2024-11-20T14:27:20.559Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2179682 00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2174484 00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2174484 ']' 00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2174484 00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2174484 00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2174484' 00:18:16.651 killing process with pid 2174484 00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2174484 00:18:16.651 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2174484 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.v4ESxVSNll 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:16.911 15:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.v4ESxVSNll 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2179929 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2179929 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2179929 ']' 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.911 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.170 [2024-11-20 15:27:20.818405] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:18:17.170 [2024-11-20 15:27:20.818455] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.170 [2024-11-20 15:27:20.895154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.170 [2024-11-20 15:27:20.933391] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.170 [2024-11-20 15:27:20.933427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.170 [2024-11-20 15:27:20.933435] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.170 [2024-11-20 15:27:20.933440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.170 [2024-11-20 15:27:20.933446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:17.170 [2024-11-20 15:27:20.934045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.170 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.170 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:17.170 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:17.170 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:17.170 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.170 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.170 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.v4ESxVSNll 00:18:17.170 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.v4ESxVSNll 00:18:17.170 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:17.429 [2024-11-20 15:27:21.241633] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.429 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:17.689 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:17.948 [2024-11-20 15:27:21.642686] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:17.948 [2024-11-20 15:27:21.642906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:17.948 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:17.948 malloc0 00:18:18.207 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:18.207 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.v4ESxVSNll 00:18:18.465 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:18.724 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.v4ESxVSNll 00:18:18.724 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:18.724 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:18.724 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:18.724 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.v4ESxVSNll 00:18:18.724 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:18.724 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:18.724 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2180187 00:18:18.724 15:27:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:18.724 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2180187 /var/tmp/bdevperf.sock 00:18:18.724 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2180187 ']' 00:18:18.724 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.724 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.724 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.724 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.724 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.724 [2024-11-20 15:27:22.499762] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:18:18.724 [2024-11-20 15:27:22.499809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180187 ] 00:18:18.724 [2024-11-20 15:27:22.568823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.724 [2024-11-20 15:27:22.609669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.982 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.982 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:18.983 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.v4ESxVSNll 00:18:19.241 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:19.241 [2024-11-20 15:27:23.085265] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.500 TLSTESTn1 00:18:19.500 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:19.500 Running I/O for 10 seconds... 
00:18:21.813 5113.00 IOPS, 19.97 MiB/s [2024-11-20T14:27:26.288Z] 5312.00 IOPS, 20.75 MiB/s [2024-11-20T14:27:27.672Z] 5369.00 IOPS, 20.97 MiB/s [2024-11-20T14:27:28.608Z] 5288.50 IOPS, 20.66 MiB/s [2024-11-20T14:27:29.546Z] 5230.00 IOPS, 20.43 MiB/s [2024-11-20T14:27:30.484Z] 5197.17 IOPS, 20.30 MiB/s [2024-11-20T14:27:31.420Z] 5152.71 IOPS, 20.13 MiB/s [2024-11-20T14:27:32.383Z] 5123.12 IOPS, 20.01 MiB/s [2024-11-20T14:27:33.383Z] 5054.44 IOPS, 19.74 MiB/s [2024-11-20T14:27:33.383Z] 5045.00 IOPS, 19.71 MiB/s 00:18:29.475 Latency(us) 00:18:29.475 [2024-11-20T14:27:33.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.475 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:29.475 Verification LBA range: start 0x0 length 0x2000 00:18:29.475 TLSTESTn1 : 10.02 5047.69 19.72 0.00 0.00 25315.58 4815.47 46274.11 00:18:29.475 [2024-11-20T14:27:33.383Z] =================================================================================================================== 00:18:29.475 [2024-11-20T14:27:33.383Z] Total : 5047.69 19.72 0.00 0.00 25315.58 4815.47 46274.11 00:18:29.475 { 00:18:29.475 "results": [ 00:18:29.475 { 00:18:29.475 "job": "TLSTESTn1", 00:18:29.475 "core_mask": "0x4", 00:18:29.475 "workload": "verify", 00:18:29.475 "status": "finished", 00:18:29.475 "verify_range": { 00:18:29.475 "start": 0, 00:18:29.475 "length": 8192 00:18:29.475 }, 00:18:29.475 "queue_depth": 128, 00:18:29.475 "io_size": 4096, 00:18:29.475 "runtime": 10.020037, 00:18:29.475 "iops": 5047.685951658662, 00:18:29.475 "mibps": 19.717523248666648, 00:18:29.475 "io_failed": 0, 00:18:29.475 "io_timeout": 0, 00:18:29.475 "avg_latency_us": 25315.584870136012, 00:18:29.475 "min_latency_us": 4815.471304347826, 00:18:29.475 "max_latency_us": 46274.114782608696 00:18:29.475 } 00:18:29.475 ], 00:18:29.475 "core_count": 1 00:18:29.475 } 00:18:29.475 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:29.475 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2180187 00:18:29.475 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2180187 ']' 00:18:29.475 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2180187 00:18:29.475 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:29.475 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:29.475 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2180187 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2180187' 00:18:29.735 killing process with pid 2180187 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2180187 00:18:29.735 Received shutdown signal, test time was about 10.000000 seconds 00:18:29.735 00:18:29.735 Latency(us) 00:18:29.735 [2024-11-20T14:27:33.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.735 [2024-11-20T14:27:33.643Z] =================================================================================================================== 00:18:29.735 [2024-11-20T14:27:33.643Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2180187 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.v4ESxVSNll 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.v4ESxVSNll 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.v4ESxVSNll 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.v4ESxVSNll 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.v4ESxVSNll 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2182022 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2182022 /var/tmp/bdevperf.sock 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2182022 ']' 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:29.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.735 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.735 [2024-11-20 15:27:33.606689] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:18:29.735 [2024-11-20 15:27:33.606740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182022 ] 00:18:29.994 [2024-11-20 15:27:33.675725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.994 [2024-11-20 15:27:33.714299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.994 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.994 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:29.994 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.v4ESxVSNll 00:18:30.254 [2024-11-20 15:27:33.981578] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.v4ESxVSNll': 0100666 00:18:30.254 [2024-11-20 15:27:33.981611] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:30.254 request: 00:18:30.254 { 00:18:30.254 "name": "key0", 00:18:30.254 "path": "/tmp/tmp.v4ESxVSNll", 00:18:30.254 "method": "keyring_file_add_key", 00:18:30.254 "req_id": 1 00:18:30.254 } 00:18:30.254 Got JSON-RPC error response 00:18:30.254 response: 00:18:30.254 { 00:18:30.254 "code": -1, 00:18:30.254 "message": "Operation not permitted" 00:18:30.254 } 00:18:30.254 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:30.513 [2024-11-20 15:27:34.186191] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:30.513 [2024-11-20 15:27:34.186221] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:30.513 request: 00:18:30.513 { 00:18:30.513 "name": "TLSTEST", 00:18:30.513 "trtype": "tcp", 00:18:30.513 "traddr": "10.0.0.2", 00:18:30.513 "adrfam": "ipv4", 00:18:30.513 "trsvcid": "4420", 00:18:30.513 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.513 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.513 "prchk_reftag": false, 00:18:30.513 "prchk_guard": false, 00:18:30.513 "hdgst": false, 00:18:30.513 "ddgst": false, 00:18:30.513 "psk": "key0", 00:18:30.513 "allow_unrecognized_csi": false, 00:18:30.513 "method": "bdev_nvme_attach_controller", 00:18:30.513 "req_id": 1 00:18:30.513 } 00:18:30.513 Got JSON-RPC error response 00:18:30.513 response: 00:18:30.513 { 00:18:30.513 "code": -126, 00:18:30.513 "message": "Required key not available" 00:18:30.513 } 00:18:30.513 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2182022 00:18:30.513 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2182022 ']' 00:18:30.513 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2182022 00:18:30.513 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:30.513 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.513 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2182022 00:18:30.513 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:30.513 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:30.513 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2182022' 00:18:30.513 killing process with pid 2182022 00:18:30.513 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2182022 00:18:30.513 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.513 00:18:30.513 Latency(us) 00:18:30.513 [2024-11-20T14:27:34.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.513 [2024-11-20T14:27:34.421Z] =================================================================================================================== 00:18:30.513 [2024-11-20T14:27:34.421Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:30.513 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2182022 00:18:30.513 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:30.513 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:30.513 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.513 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:30.513 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.513 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2179929 00:18:30.513 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2179929 ']' 00:18:30.513 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2179929 00:18:30.513 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:30.773 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.773 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2179929 00:18:30.773 
15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:30.773 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:30.773 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2179929' 00:18:30.773 killing process with pid 2179929 00:18:30.773 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2179929 00:18:30.773 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2179929 00:18:30.773 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:30.773 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:30.773 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:30.773 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.773 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2182262 00:18:30.773 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:30.773 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2182262 00:18:30.773 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2182262 ']' 00:18:30.773 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.773 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.773 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:30.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.773 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.773 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.033 [2024-11-20 15:27:34.679549] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:18:31.033 [2024-11-20 15:27:34.679598] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.033 [2024-11-20 15:27:34.760146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.033 [2024-11-20 15:27:34.801402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.033 [2024-11-20 15:27:34.801441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.033 [2024-11-20 15:27:34.801448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.033 [2024-11-20 15:27:34.801455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.033 [2024-11-20 15:27:34.801460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:31.033 [2024-11-20 15:27:34.801990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.033 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.033 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:31.033 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:31.033 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:31.033 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.292 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.292 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.v4ESxVSNll 00:18:31.292 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:31.292 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.v4ESxVSNll 00:18:31.292 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:31.292 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.292 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:31.292 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.292 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.v4ESxVSNll 00:18:31.292 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.v4ESxVSNll 00:18:31.292 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:31.292 [2024-11-20 15:27:35.114106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.292 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:31.551 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:31.810 [2024-11-20 15:27:35.503115] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:31.810 [2024-11-20 15:27:35.503329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.810 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:31.810 malloc0 00:18:32.069 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:32.069 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.v4ESxVSNll 00:18:32.328 [2024-11-20 15:27:36.112745] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.v4ESxVSNll': 0100666 00:18:32.328 [2024-11-20 15:27:36.112775] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:32.328 request: 00:18:32.328 { 00:18:32.328 "name": "key0", 00:18:32.328 "path": "/tmp/tmp.v4ESxVSNll", 00:18:32.328 "method": "keyring_file_add_key", 00:18:32.328 "req_id": 1 
00:18:32.328 } 00:18:32.328 Got JSON-RPC error response 00:18:32.328 response: 00:18:32.328 { 00:18:32.328 "code": -1, 00:18:32.328 "message": "Operation not permitted" 00:18:32.328 } 00:18:32.328 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:32.588 [2024-11-20 15:27:36.309285] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:32.588 [2024-11-20 15:27:36.309321] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:32.588 request: 00:18:32.588 { 00:18:32.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.588 "host": "nqn.2016-06.io.spdk:host1", 00:18:32.588 "psk": "key0", 00:18:32.588 "method": "nvmf_subsystem_add_host", 00:18:32.588 "req_id": 1 00:18:32.588 } 00:18:32.588 Got JSON-RPC error response 00:18:32.588 response: 00:18:32.588 { 00:18:32.588 "code": -32603, 00:18:32.588 "message": "Internal error" 00:18:32.588 } 00:18:32.588 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:32.588 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:32.588 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:32.588 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:32.588 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2182262 00:18:32.588 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2182262 ']' 00:18:32.588 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2182262 00:18:32.588 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:32.588 15:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.588 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2182262 00:18:32.588 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:32.588 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:32.588 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2182262' 00:18:32.588 killing process with pid 2182262 00:18:32.588 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2182262 00:18:32.588 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2182262 00:18:32.847 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.v4ESxVSNll 00:18:32.847 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:32.847 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:32.847 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:32.847 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.847 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2182533 00:18:32.847 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2182533 00:18:32.847 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:32.847 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2182533 ']' 00:18:32.847 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.847 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.847 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.847 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.847 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.847 [2024-11-20 15:27:36.615905] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:18:32.847 [2024-11-20 15:27:36.615970] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.847 [2024-11-20 15:27:36.693692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.847 [2024-11-20 15:27:36.731265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.847 [2024-11-20 15:27:36.731300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.847 [2024-11-20 15:27:36.731308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.847 [2024-11-20 15:27:36.731315] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.847 [2024-11-20 15:27:36.731319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:32.847 [2024-11-20 15:27:36.731886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.106 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.106 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:33.106 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:33.106 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:33.106 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.106 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.106 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.v4ESxVSNll 00:18:33.106 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.v4ESxVSNll 00:18:33.107 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:33.365 [2024-11-20 15:27:37.052064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.365 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:33.624 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:33.624 [2024-11-20 15:27:37.453087] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:33.624 [2024-11-20 15:27:37.453348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:33.625 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:33.883 malloc0 00:18:33.883 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:34.142 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.v4ESxVSNll 00:18:34.401 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:34.401 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:34.401 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2182793 00:18:34.401 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:34.401 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2182793 /var/tmp/bdevperf.sock 00:18:34.401 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2182793 ']' 00:18:34.401 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.401 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.401 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:18:34.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.401 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.401 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.660 [2024-11-20 15:27:38.319865] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:18:34.660 [2024-11-20 15:27:38.319915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182793 ] 00:18:34.660 [2024-11-20 15:27:38.396786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.660 [2024-11-20 15:27:38.439613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.661 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.661 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:34.661 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.v4ESxVSNll 00:18:34.919 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:35.178 [2024-11-20 15:27:38.911748] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:35.178 TLSTESTn1 00:18:35.178 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:35.454 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:35.454 "subsystems": [ 00:18:35.454 { 00:18:35.454 "subsystem": "keyring", 00:18:35.454 "config": [ 00:18:35.454 { 00:18:35.454 "method": "keyring_file_add_key", 00:18:35.454 "params": { 00:18:35.454 "name": "key0", 00:18:35.454 "path": "/tmp/tmp.v4ESxVSNll" 00:18:35.454 } 00:18:35.454 } 00:18:35.454 ] 00:18:35.454 }, 00:18:35.454 { 00:18:35.454 "subsystem": "iobuf", 00:18:35.454 "config": [ 00:18:35.454 { 00:18:35.454 "method": "iobuf_set_options", 00:18:35.454 "params": { 00:18:35.454 "small_pool_count": 8192, 00:18:35.454 "large_pool_count": 1024, 00:18:35.454 "small_bufsize": 8192, 00:18:35.454 "large_bufsize": 135168, 00:18:35.454 "enable_numa": false 00:18:35.454 } 00:18:35.454 } 00:18:35.454 ] 00:18:35.454 }, 00:18:35.454 { 00:18:35.454 "subsystem": "sock", 00:18:35.454 "config": [ 00:18:35.454 { 00:18:35.454 "method": "sock_set_default_impl", 00:18:35.454 "params": { 00:18:35.454 "impl_name": "posix" 00:18:35.454 } 00:18:35.454 }, 00:18:35.454 { 00:18:35.454 "method": "sock_impl_set_options", 00:18:35.454 "params": { 00:18:35.454 "impl_name": "ssl", 00:18:35.454 "recv_buf_size": 4096, 00:18:35.454 "send_buf_size": 4096, 00:18:35.454 "enable_recv_pipe": true, 00:18:35.454 "enable_quickack": false, 00:18:35.454 "enable_placement_id": 0, 00:18:35.454 "enable_zerocopy_send_server": true, 00:18:35.454 "enable_zerocopy_send_client": false, 00:18:35.454 "zerocopy_threshold": 0, 00:18:35.454 "tls_version": 0, 00:18:35.454 "enable_ktls": false 00:18:35.454 } 00:18:35.454 }, 00:18:35.454 { 00:18:35.454 "method": "sock_impl_set_options", 00:18:35.454 "params": { 00:18:35.455 "impl_name": "posix", 00:18:35.455 "recv_buf_size": 2097152, 00:18:35.455 "send_buf_size": 2097152, 00:18:35.455 "enable_recv_pipe": true, 00:18:35.455 "enable_quickack": false, 00:18:35.455 "enable_placement_id": 0, 
00:18:35.455 "enable_zerocopy_send_server": true, 00:18:35.455 "enable_zerocopy_send_client": false, 00:18:35.455 "zerocopy_threshold": 0, 00:18:35.455 "tls_version": 0, 00:18:35.455 "enable_ktls": false 00:18:35.455 } 00:18:35.455 } 00:18:35.455 ] 00:18:35.455 }, 00:18:35.455 { 00:18:35.455 "subsystem": "vmd", 00:18:35.455 "config": [] 00:18:35.455 }, 00:18:35.455 { 00:18:35.455 "subsystem": "accel", 00:18:35.455 "config": [ 00:18:35.455 { 00:18:35.455 "method": "accel_set_options", 00:18:35.455 "params": { 00:18:35.455 "small_cache_size": 128, 00:18:35.455 "large_cache_size": 16, 00:18:35.455 "task_count": 2048, 00:18:35.455 "sequence_count": 2048, 00:18:35.455 "buf_count": 2048 00:18:35.455 } 00:18:35.455 } 00:18:35.455 ] 00:18:35.455 }, 00:18:35.455 { 00:18:35.455 "subsystem": "bdev", 00:18:35.455 "config": [ 00:18:35.455 { 00:18:35.455 "method": "bdev_set_options", 00:18:35.455 "params": { 00:18:35.455 "bdev_io_pool_size": 65535, 00:18:35.455 "bdev_io_cache_size": 256, 00:18:35.455 "bdev_auto_examine": true, 00:18:35.455 "iobuf_small_cache_size": 128, 00:18:35.455 "iobuf_large_cache_size": 16 00:18:35.455 } 00:18:35.455 }, 00:18:35.455 { 00:18:35.455 "method": "bdev_raid_set_options", 00:18:35.455 "params": { 00:18:35.455 "process_window_size_kb": 1024, 00:18:35.455 "process_max_bandwidth_mb_sec": 0 00:18:35.455 } 00:18:35.455 }, 00:18:35.455 { 00:18:35.455 "method": "bdev_iscsi_set_options", 00:18:35.455 "params": { 00:18:35.455 "timeout_sec": 30 00:18:35.455 } 00:18:35.455 }, 00:18:35.455 { 00:18:35.455 "method": "bdev_nvme_set_options", 00:18:35.455 "params": { 00:18:35.455 "action_on_timeout": "none", 00:18:35.455 "timeout_us": 0, 00:18:35.455 "timeout_admin_us": 0, 00:18:35.455 "keep_alive_timeout_ms": 10000, 00:18:35.455 "arbitration_burst": 0, 00:18:35.455 "low_priority_weight": 0, 00:18:35.455 "medium_priority_weight": 0, 00:18:35.455 "high_priority_weight": 0, 00:18:35.455 "nvme_adminq_poll_period_us": 10000, 00:18:35.455 "nvme_ioq_poll_period_us": 0, 
00:18:35.455 "io_queue_requests": 0, 00:18:35.455 "delay_cmd_submit": true, 00:18:35.455 "transport_retry_count": 4, 00:18:35.455 "bdev_retry_count": 3, 00:18:35.455 "transport_ack_timeout": 0, 00:18:35.455 "ctrlr_loss_timeout_sec": 0, 00:18:35.455 "reconnect_delay_sec": 0, 00:18:35.455 "fast_io_fail_timeout_sec": 0, 00:18:35.455 "disable_auto_failback": false, 00:18:35.455 "generate_uuids": false, 00:18:35.455 "transport_tos": 0, 00:18:35.455 "nvme_error_stat": false, 00:18:35.455 "rdma_srq_size": 0, 00:18:35.455 "io_path_stat": false, 00:18:35.455 "allow_accel_sequence": false, 00:18:35.455 "rdma_max_cq_size": 0, 00:18:35.455 "rdma_cm_event_timeout_ms": 0, 00:18:35.455 "dhchap_digests": [ 00:18:35.455 "sha256", 00:18:35.455 "sha384", 00:18:35.455 "sha512" 00:18:35.455 ], 00:18:35.455 "dhchap_dhgroups": [ 00:18:35.455 "null", 00:18:35.455 "ffdhe2048", 00:18:35.455 "ffdhe3072", 00:18:35.455 "ffdhe4096", 00:18:35.455 "ffdhe6144", 00:18:35.455 "ffdhe8192" 00:18:35.455 ] 00:18:35.455 } 00:18:35.455 }, 00:18:35.455 { 00:18:35.455 "method": "bdev_nvme_set_hotplug", 00:18:35.455 "params": { 00:18:35.455 "period_us": 100000, 00:18:35.455 "enable": false 00:18:35.455 } 00:18:35.455 }, 00:18:35.455 { 00:18:35.455 "method": "bdev_malloc_create", 00:18:35.455 "params": { 00:18:35.455 "name": "malloc0", 00:18:35.455 "num_blocks": 8192, 00:18:35.455 "block_size": 4096, 00:18:35.455 "physical_block_size": 4096, 00:18:35.455 "uuid": "57d032c0-1ed9-435d-8988-5e029cdaf513", 00:18:35.455 "optimal_io_boundary": 0, 00:18:35.455 "md_size": 0, 00:18:35.455 "dif_type": 0, 00:18:35.455 "dif_is_head_of_md": false, 00:18:35.455 "dif_pi_format": 0 00:18:35.455 } 00:18:35.455 }, 00:18:35.455 { 00:18:35.455 "method": "bdev_wait_for_examine" 00:18:35.455 } 00:18:35.455 ] 00:18:35.455 }, 00:18:35.455 { 00:18:35.455 "subsystem": "nbd", 00:18:35.455 "config": [] 00:18:35.455 }, 00:18:35.455 { 00:18:35.455 "subsystem": "scheduler", 00:18:35.455 "config": [ 00:18:35.455 { 00:18:35.455 "method": 
"framework_set_scheduler", 00:18:35.455 "params": { 00:18:35.455 "name": "static" 00:18:35.455 } 00:18:35.455 } 00:18:35.455 ] 00:18:35.455 }, 00:18:35.455 { 00:18:35.455 "subsystem": "nvmf", 00:18:35.455 "config": [ 00:18:35.455 { 00:18:35.455 "method": "nvmf_set_config", 00:18:35.455 "params": { 00:18:35.455 "discovery_filter": "match_any", 00:18:35.455 "admin_cmd_passthru": { 00:18:35.455 "identify_ctrlr": false 00:18:35.455 }, 00:18:35.455 "dhchap_digests": [ 00:18:35.455 "sha256", 00:18:35.455 "sha384", 00:18:35.455 "sha512" 00:18:35.455 ], 00:18:35.455 "dhchap_dhgroups": [ 00:18:35.455 "null", 00:18:35.455 "ffdhe2048", 00:18:35.455 "ffdhe3072", 00:18:35.455 "ffdhe4096", 00:18:35.455 "ffdhe6144", 00:18:35.455 "ffdhe8192" 00:18:35.455 ] 00:18:35.455 } 00:18:35.455 }, 00:18:35.455 { 00:18:35.455 "method": "nvmf_set_max_subsystems", 00:18:35.455 "params": { 00:18:35.455 "max_subsystems": 1024 00:18:35.455 } 00:18:35.455 }, 00:18:35.455 { 00:18:35.455 "method": "nvmf_set_crdt", 00:18:35.455 "params": { 00:18:35.455 "crdt1": 0, 00:18:35.455 "crdt2": 0, 00:18:35.455 "crdt3": 0 00:18:35.455 } 00:18:35.455 }, 00:18:35.455 { 00:18:35.455 "method": "nvmf_create_transport", 00:18:35.455 "params": { 00:18:35.455 "trtype": "TCP", 00:18:35.455 "max_queue_depth": 128, 00:18:35.455 "max_io_qpairs_per_ctrlr": 127, 00:18:35.455 "in_capsule_data_size": 4096, 00:18:35.455 "max_io_size": 131072, 00:18:35.455 "io_unit_size": 131072, 00:18:35.455 "max_aq_depth": 128, 00:18:35.455 "num_shared_buffers": 511, 00:18:35.455 "buf_cache_size": 4294967295, 00:18:35.455 "dif_insert_or_strip": false, 00:18:35.455 "zcopy": false, 00:18:35.455 "c2h_success": false, 00:18:35.455 "sock_priority": 0, 00:18:35.455 "abort_timeout_sec": 1, 00:18:35.455 "ack_timeout": 0, 00:18:35.455 "data_wr_pool_size": 0 00:18:35.455 } 00:18:35.455 }, 00:18:35.455 { 00:18:35.455 "method": "nvmf_create_subsystem", 00:18:35.455 "params": { 00:18:35.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.455 
"allow_any_host": false, 00:18:35.455 "serial_number": "SPDK00000000000001", 00:18:35.455 "model_number": "SPDK bdev Controller", 00:18:35.455 "max_namespaces": 10, 00:18:35.455 "min_cntlid": 1, 00:18:35.455 "max_cntlid": 65519, 00:18:35.455 "ana_reporting": false 00:18:35.455 } 00:18:35.455 }, 00:18:35.455 { 00:18:35.455 "method": "nvmf_subsystem_add_host", 00:18:35.455 "params": { 00:18:35.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.455 "host": "nqn.2016-06.io.spdk:host1", 00:18:35.455 "psk": "key0" 00:18:35.455 } 00:18:35.455 }, 00:18:35.455 { 00:18:35.455 "method": "nvmf_subsystem_add_ns", 00:18:35.455 "params": { 00:18:35.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.455 "namespace": { 00:18:35.455 "nsid": 1, 00:18:35.455 "bdev_name": "malloc0", 00:18:35.455 "nguid": "57D032C01ED9435D89885E029CDAF513", 00:18:35.455 "uuid": "57d032c0-1ed9-435d-8988-5e029cdaf513", 00:18:35.455 "no_auto_visible": false 00:18:35.455 } 00:18:35.455 } 00:18:35.455 }, 00:18:35.455 { 00:18:35.455 "method": "nvmf_subsystem_add_listener", 00:18:35.455 "params": { 00:18:35.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.455 "listen_address": { 00:18:35.455 "trtype": "TCP", 00:18:35.455 "adrfam": "IPv4", 00:18:35.455 "traddr": "10.0.0.2", 00:18:35.455 "trsvcid": "4420" 00:18:35.455 }, 00:18:35.455 "secure_channel": true 00:18:35.455 } 00:18:35.455 } 00:18:35.455 ] 00:18:35.455 } 00:18:35.455 ] 00:18:35.455 }' 00:18:35.455 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:35.715 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:35.715 "subsystems": [ 00:18:35.715 { 00:18:35.715 "subsystem": "keyring", 00:18:35.715 "config": [ 00:18:35.715 { 00:18:35.715 "method": "keyring_file_add_key", 00:18:35.715 "params": { 00:18:35.715 "name": "key0", 00:18:35.715 "path": "/tmp/tmp.v4ESxVSNll" 00:18:35.715 } 
00:18:35.715 } 00:18:35.715 ] 00:18:35.715 }, 00:18:35.715 { 00:18:35.715 "subsystem": "iobuf", 00:18:35.715 "config": [ 00:18:35.715 { 00:18:35.715 "method": "iobuf_set_options", 00:18:35.715 "params": { 00:18:35.715 "small_pool_count": 8192, 00:18:35.715 "large_pool_count": 1024, 00:18:35.715 "small_bufsize": 8192, 00:18:35.715 "large_bufsize": 135168, 00:18:35.715 "enable_numa": false 00:18:35.715 } 00:18:35.715 } 00:18:35.715 ] 00:18:35.715 }, 00:18:35.715 { 00:18:35.715 "subsystem": "sock", 00:18:35.715 "config": [ 00:18:35.715 { 00:18:35.715 "method": "sock_set_default_impl", 00:18:35.715 "params": { 00:18:35.715 "impl_name": "posix" 00:18:35.715 } 00:18:35.715 }, 00:18:35.715 { 00:18:35.715 "method": "sock_impl_set_options", 00:18:35.715 "params": { 00:18:35.715 "impl_name": "ssl", 00:18:35.715 "recv_buf_size": 4096, 00:18:35.715 "send_buf_size": 4096, 00:18:35.715 "enable_recv_pipe": true, 00:18:35.715 "enable_quickack": false, 00:18:35.715 "enable_placement_id": 0, 00:18:35.715 "enable_zerocopy_send_server": true, 00:18:35.715 "enable_zerocopy_send_client": false, 00:18:35.715 "zerocopy_threshold": 0, 00:18:35.715 "tls_version": 0, 00:18:35.715 "enable_ktls": false 00:18:35.715 } 00:18:35.715 }, 00:18:35.715 { 00:18:35.715 "method": "sock_impl_set_options", 00:18:35.715 "params": { 00:18:35.715 "impl_name": "posix", 00:18:35.715 "recv_buf_size": 2097152, 00:18:35.715 "send_buf_size": 2097152, 00:18:35.715 "enable_recv_pipe": true, 00:18:35.715 "enable_quickack": false, 00:18:35.715 "enable_placement_id": 0, 00:18:35.715 "enable_zerocopy_send_server": true, 00:18:35.715 "enable_zerocopy_send_client": false, 00:18:35.715 "zerocopy_threshold": 0, 00:18:35.715 "tls_version": 0, 00:18:35.715 "enable_ktls": false 00:18:35.715 } 00:18:35.715 } 00:18:35.715 ] 00:18:35.715 }, 00:18:35.715 { 00:18:35.715 "subsystem": "vmd", 00:18:35.715 "config": [] 00:18:35.715 }, 00:18:35.715 { 00:18:35.715 "subsystem": "accel", 00:18:35.715 "config": [ 00:18:35.715 { 00:18:35.715 
"method": "accel_set_options", 00:18:35.715 "params": { 00:18:35.715 "small_cache_size": 128, 00:18:35.715 "large_cache_size": 16, 00:18:35.715 "task_count": 2048, 00:18:35.715 "sequence_count": 2048, 00:18:35.715 "buf_count": 2048 00:18:35.715 } 00:18:35.715 } 00:18:35.715 ] 00:18:35.715 }, 00:18:35.715 { 00:18:35.715 "subsystem": "bdev", 00:18:35.715 "config": [ 00:18:35.715 { 00:18:35.715 "method": "bdev_set_options", 00:18:35.715 "params": { 00:18:35.715 "bdev_io_pool_size": 65535, 00:18:35.715 "bdev_io_cache_size": 256, 00:18:35.715 "bdev_auto_examine": true, 00:18:35.715 "iobuf_small_cache_size": 128, 00:18:35.715 "iobuf_large_cache_size": 16 00:18:35.715 } 00:18:35.715 }, 00:18:35.715 { 00:18:35.715 "method": "bdev_raid_set_options", 00:18:35.715 "params": { 00:18:35.715 "process_window_size_kb": 1024, 00:18:35.715 "process_max_bandwidth_mb_sec": 0 00:18:35.715 } 00:18:35.715 }, 00:18:35.715 { 00:18:35.715 "method": "bdev_iscsi_set_options", 00:18:35.715 "params": { 00:18:35.715 "timeout_sec": 30 00:18:35.715 } 00:18:35.715 }, 00:18:35.715 { 00:18:35.715 "method": "bdev_nvme_set_options", 00:18:35.715 "params": { 00:18:35.715 "action_on_timeout": "none", 00:18:35.715 "timeout_us": 0, 00:18:35.715 "timeout_admin_us": 0, 00:18:35.715 "keep_alive_timeout_ms": 10000, 00:18:35.715 "arbitration_burst": 0, 00:18:35.715 "low_priority_weight": 0, 00:18:35.715 "medium_priority_weight": 0, 00:18:35.715 "high_priority_weight": 0, 00:18:35.715 "nvme_adminq_poll_period_us": 10000, 00:18:35.715 "nvme_ioq_poll_period_us": 0, 00:18:35.715 "io_queue_requests": 512, 00:18:35.715 "delay_cmd_submit": true, 00:18:35.715 "transport_retry_count": 4, 00:18:35.715 "bdev_retry_count": 3, 00:18:35.715 "transport_ack_timeout": 0, 00:18:35.715 "ctrlr_loss_timeout_sec": 0, 00:18:35.715 "reconnect_delay_sec": 0, 00:18:35.715 "fast_io_fail_timeout_sec": 0, 00:18:35.715 "disable_auto_failback": false, 00:18:35.715 "generate_uuids": false, 00:18:35.715 "transport_tos": 0, 00:18:35.715 
"nvme_error_stat": false, 00:18:35.715 "rdma_srq_size": 0, 00:18:35.715 "io_path_stat": false, 00:18:35.715 "allow_accel_sequence": false, 00:18:35.715 "rdma_max_cq_size": 0, 00:18:35.715 "rdma_cm_event_timeout_ms": 0, 00:18:35.715 "dhchap_digests": [ 00:18:35.716 "sha256", 00:18:35.716 "sha384", 00:18:35.716 "sha512" 00:18:35.716 ], 00:18:35.716 "dhchap_dhgroups": [ 00:18:35.716 "null", 00:18:35.716 "ffdhe2048", 00:18:35.716 "ffdhe3072", 00:18:35.716 "ffdhe4096", 00:18:35.716 "ffdhe6144", 00:18:35.716 "ffdhe8192" 00:18:35.716 ] 00:18:35.716 } 00:18:35.716 }, 00:18:35.716 { 00:18:35.716 "method": "bdev_nvme_attach_controller", 00:18:35.716 "params": { 00:18:35.716 "name": "TLSTEST", 00:18:35.716 "trtype": "TCP", 00:18:35.716 "adrfam": "IPv4", 00:18:35.716 "traddr": "10.0.0.2", 00:18:35.716 "trsvcid": "4420", 00:18:35.716 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.716 "prchk_reftag": false, 00:18:35.716 "prchk_guard": false, 00:18:35.716 "ctrlr_loss_timeout_sec": 0, 00:18:35.716 "reconnect_delay_sec": 0, 00:18:35.716 "fast_io_fail_timeout_sec": 0, 00:18:35.716 "psk": "key0", 00:18:35.716 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:35.716 "hdgst": false, 00:18:35.716 "ddgst": false, 00:18:35.716 "multipath": "multipath" 00:18:35.716 } 00:18:35.716 }, 00:18:35.716 { 00:18:35.716 "method": "bdev_nvme_set_hotplug", 00:18:35.716 "params": { 00:18:35.716 "period_us": 100000, 00:18:35.716 "enable": false 00:18:35.716 } 00:18:35.716 }, 00:18:35.716 { 00:18:35.716 "method": "bdev_wait_for_examine" 00:18:35.716 } 00:18:35.716 ] 00:18:35.716 }, 00:18:35.716 { 00:18:35.716 "subsystem": "nbd", 00:18:35.716 "config": [] 00:18:35.716 } 00:18:35.716 ] 00:18:35.716 }' 00:18:35.716 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2182793 00:18:35.716 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2182793 ']' 00:18:35.716 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2182793 00:18:35.716 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:35.716 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.716 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2182793 00:18:35.716 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:35.716 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:35.716 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2182793' 00:18:35.716 killing process with pid 2182793 00:18:35.716 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2182793 00:18:35.716 Received shutdown signal, test time was about 10.000000 seconds 00:18:35.716 00:18:35.716 Latency(us) 00:18:35.716 [2024-11-20T14:27:39.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.716 [2024-11-20T14:27:39.624Z] =================================================================================================================== 00:18:35.716 [2024-11-20T14:27:39.624Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:35.716 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2182793 00:18:35.975 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2182533 00:18:35.975 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2182533 ']' 00:18:35.975 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2182533 00:18:35.975 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:35.975 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.975 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2182533 00:18:35.975 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:35.975 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:35.975 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2182533' 00:18:35.975 killing process with pid 2182533 00:18:35.975 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2182533 00:18:35.975 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2182533 00:18:36.235 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:36.235 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:36.235 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:36.235 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:36.235 "subsystems": [ 00:18:36.235 { 00:18:36.235 "subsystem": "keyring", 00:18:36.235 "config": [ 00:18:36.235 { 00:18:36.235 "method": "keyring_file_add_key", 00:18:36.235 "params": { 00:18:36.235 "name": "key0", 00:18:36.235 "path": "/tmp/tmp.v4ESxVSNll" 00:18:36.235 } 00:18:36.235 } 00:18:36.235 ] 00:18:36.235 }, 00:18:36.235 { 00:18:36.235 "subsystem": "iobuf", 00:18:36.235 "config": [ 00:18:36.235 { 00:18:36.235 "method": "iobuf_set_options", 00:18:36.235 "params": { 00:18:36.235 "small_pool_count": 8192, 00:18:36.235 "large_pool_count": 1024, 00:18:36.235 "small_bufsize": 8192, 00:18:36.235 "large_bufsize": 135168, 00:18:36.235 "enable_numa": false 00:18:36.235 } 00:18:36.235 } 00:18:36.235 ] 00:18:36.235 }, 
00:18:36.235 { 00:18:36.235 "subsystem": "sock", 00:18:36.235 "config": [ 00:18:36.235 { 00:18:36.235 "method": "sock_set_default_impl", 00:18:36.235 "params": { 00:18:36.235 "impl_name": "posix" 00:18:36.235 } 00:18:36.235 }, 00:18:36.235 { 00:18:36.235 "method": "sock_impl_set_options", 00:18:36.235 "params": { 00:18:36.235 "impl_name": "ssl", 00:18:36.235 "recv_buf_size": 4096, 00:18:36.235 "send_buf_size": 4096, 00:18:36.235 "enable_recv_pipe": true, 00:18:36.235 "enable_quickack": false, 00:18:36.235 "enable_placement_id": 0, 00:18:36.235 "enable_zerocopy_send_server": true, 00:18:36.235 "enable_zerocopy_send_client": false, 00:18:36.235 "zerocopy_threshold": 0, 00:18:36.235 "tls_version": 0, 00:18:36.235 "enable_ktls": false 00:18:36.235 } 00:18:36.235 }, 00:18:36.235 { 00:18:36.235 "method": "sock_impl_set_options", 00:18:36.235 "params": { 00:18:36.235 "impl_name": "posix", 00:18:36.235 "recv_buf_size": 2097152, 00:18:36.235 "send_buf_size": 2097152, 00:18:36.235 "enable_recv_pipe": true, 00:18:36.235 "enable_quickack": false, 00:18:36.235 "enable_placement_id": 0, 00:18:36.235 "enable_zerocopy_send_server": true, 00:18:36.235 "enable_zerocopy_send_client": false, 00:18:36.235 "zerocopy_threshold": 0, 00:18:36.235 "tls_version": 0, 00:18:36.235 "enable_ktls": false 00:18:36.235 } 00:18:36.235 } 00:18:36.235 ] 00:18:36.235 }, 00:18:36.235 { 00:18:36.235 "subsystem": "vmd", 00:18:36.235 "config": [] 00:18:36.235 }, 00:18:36.235 { 00:18:36.235 "subsystem": "accel", 00:18:36.235 "config": [ 00:18:36.235 { 00:18:36.235 "method": "accel_set_options", 00:18:36.235 "params": { 00:18:36.235 "small_cache_size": 128, 00:18:36.235 "large_cache_size": 16, 00:18:36.235 "task_count": 2048, 00:18:36.235 "sequence_count": 2048, 00:18:36.235 "buf_count": 2048 00:18:36.235 } 00:18:36.235 } 00:18:36.235 ] 00:18:36.235 }, 00:18:36.235 { 00:18:36.235 "subsystem": "bdev", 00:18:36.235 "config": [ 00:18:36.235 { 00:18:36.235 "method": "bdev_set_options", 00:18:36.235 "params": { 
00:18:36.235 "bdev_io_pool_size": 65535, 00:18:36.235 "bdev_io_cache_size": 256, 00:18:36.235 "bdev_auto_examine": true, 00:18:36.235 "iobuf_small_cache_size": 128, 00:18:36.235 "iobuf_large_cache_size": 16 00:18:36.235 } 00:18:36.235 }, 00:18:36.235 { 00:18:36.235 "method": "bdev_raid_set_options", 00:18:36.235 "params": { 00:18:36.235 "process_window_size_kb": 1024, 00:18:36.235 "process_max_bandwidth_mb_sec": 0 00:18:36.235 } 00:18:36.235 }, 00:18:36.235 { 00:18:36.235 "method": "bdev_iscsi_set_options", 00:18:36.235 "params": { 00:18:36.235 "timeout_sec": 30 00:18:36.235 } 00:18:36.235 }, 00:18:36.235 { 00:18:36.235 "method": "bdev_nvme_set_options", 00:18:36.235 "params": { 00:18:36.235 "action_on_timeout": "none", 00:18:36.235 "timeout_us": 0, 00:18:36.235 "timeout_admin_us": 0, 00:18:36.235 "keep_alive_timeout_ms": 10000, 00:18:36.235 "arbitration_burst": 0, 00:18:36.235 "low_priority_weight": 0, 00:18:36.235 "medium_priority_weight": 0, 00:18:36.235 "high_priority_weight": 0, 00:18:36.235 "nvme_adminq_poll_period_us": 10000, 00:18:36.235 "nvme_ioq_poll_period_us": 0, 00:18:36.235 "io_queue_requests": 0, 00:18:36.235 "delay_cmd_submit": true, 00:18:36.235 "transport_retry_count": 4, 00:18:36.235 "bdev_retry_count": 3, 00:18:36.235 "transport_ack_timeout": 0, 00:18:36.235 "ctrlr_loss_timeout_sec": 0, 00:18:36.235 "reconnect_delay_sec": 0, 00:18:36.235 "fast_io_fail_timeout_sec": 0, 00:18:36.235 "disable_auto_failback": false, 00:18:36.235 "generate_uuids": false, 00:18:36.235 "transport_tos": 0, 00:18:36.235 "nvme_error_stat": false, 00:18:36.235 "rdma_srq_size": 0, 00:18:36.235 "io_path_stat": false, 00:18:36.235 "allow_accel_sequence": false, 00:18:36.235 "rdma_max_cq_size": 0, 00:18:36.235 "rdma_cm_event_timeout_ms": 0, 00:18:36.235 "dhchap_digests": [ 00:18:36.235 "sha256", 00:18:36.235 "sha384", 00:18:36.235 "sha512" 00:18:36.235 ], 00:18:36.235 "dhchap_dhgroups": [ 00:18:36.235 "null", 00:18:36.235 "ffdhe2048", 00:18:36.235 "ffdhe3072", 00:18:36.235 
"ffdhe4096", 00:18:36.235 "ffdhe6144", 00:18:36.235 "ffdhe8192" 00:18:36.235 ] 00:18:36.235 } 00:18:36.235 }, 00:18:36.235 { 00:18:36.235 "method": "bdev_nvme_set_hotplug", 00:18:36.235 "params": { 00:18:36.235 "period_us": 100000, 00:18:36.235 "enable": false 00:18:36.235 } 00:18:36.235 }, 00:18:36.235 { 00:18:36.235 "method": "bdev_malloc_create", 00:18:36.235 "params": { 00:18:36.235 "name": "malloc0", 00:18:36.235 "num_blocks": 8192, 00:18:36.235 "block_size": 4096, 00:18:36.235 "physical_block_size": 4096, 00:18:36.235 "uuid": "57d032c0-1ed9-435d-8988-5e029cdaf513", 00:18:36.235 "optimal_io_boundary": 0, 00:18:36.235 "md_size": 0, 00:18:36.235 "dif_type": 0, 00:18:36.235 "dif_is_head_of_md": false, 00:18:36.236 "dif_pi_format": 0 00:18:36.236 } 00:18:36.236 }, 00:18:36.236 { 00:18:36.236 "method": "bdev_wait_for_examine" 00:18:36.236 } 00:18:36.236 ] 00:18:36.236 }, 00:18:36.236 { 00:18:36.236 "subsystem": "nbd", 00:18:36.236 "config": [] 00:18:36.236 }, 00:18:36.236 { 00:18:36.236 "subsystem": "scheduler", 00:18:36.236 "config": [ 00:18:36.236 { 00:18:36.236 "method": "framework_set_scheduler", 00:18:36.236 "params": { 00:18:36.236 "name": "static" 00:18:36.236 } 00:18:36.236 } 00:18:36.236 ] 00:18:36.236 }, 00:18:36.236 { 00:18:36.236 "subsystem": "nvmf", 00:18:36.236 "config": [ 00:18:36.236 { 00:18:36.236 "method": "nvmf_set_config", 00:18:36.236 "params": { 00:18:36.236 "discovery_filter": "match_any", 00:18:36.236 "admin_cmd_passthru": { 00:18:36.236 "identify_ctrlr": false 00:18:36.236 }, 00:18:36.236 "dhchap_digests": [ 00:18:36.236 "sha256", 00:18:36.236 "sha384", 00:18:36.236 "sha512" 00:18:36.236 ], 00:18:36.236 "dhchap_dhgroups": [ 00:18:36.236 "null", 00:18:36.236 "ffdhe2048", 00:18:36.236 "ffdhe3072", 00:18:36.236 "ffdhe4096", 00:18:36.236 "ffdhe6144", 00:18:36.236 "ffdhe8192" 00:18:36.236 ] 00:18:36.236 } 00:18:36.236 }, 00:18:36.236 { 00:18:36.236 "method": "nvmf_set_max_subsystems", 00:18:36.236 "params": { 00:18:36.236 "max_subsystems": 1024 
00:18:36.236 } 00:18:36.236 }, 00:18:36.236 { 00:18:36.236 "method": "nvmf_set_crdt", 00:18:36.236 "params": { 00:18:36.236 "crdt1": 0, 00:18:36.236 "crdt2": 0, 00:18:36.236 "crdt3": 0 00:18:36.236 } 00:18:36.236 }, 00:18:36.236 { 00:18:36.236 "method": "nvmf_create_transport", 00:18:36.236 "params": { 00:18:36.236 "trtype": "TCP", 00:18:36.236 "max_queue_depth": 128, 00:18:36.236 "max_io_qpairs_per_ctrlr": 127, 00:18:36.236 "in_capsule_data_size": 4096, 00:18:36.236 "max_io_size": 131072, 00:18:36.236 "io_unit_size": 131072, 00:18:36.236 "max_aq_depth": 128, 00:18:36.236 "num_shared_buffers": 511, 00:18:36.236 "buf_cache_size": 4294967295, 00:18:36.236 "dif_insert_or_strip": false, 00:18:36.236 "zcopy": false, 00:18:36.236 "c2h_success": false, 00:18:36.236 "sock_priority": 0, 00:18:36.236 "abort_timeout_sec": 1, 00:18:36.236 "ack_timeout": 0, 00:18:36.236 "data_wr_pool_size": 0 00:18:36.236 } 00:18:36.236 }, 00:18:36.236 { 00:18:36.236 "method": "nvmf_create_subsystem", 00:18:36.236 "params": { 00:18:36.236 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.236 "allow_any_host": false, 00:18:36.236 "serial_number": "SPDK00000000000001", 00:18:36.236 "model_number": "SPDK bdev Controller", 00:18:36.236 "max_namespaces": 10, 00:18:36.236 "min_cntlid": 1, 00:18:36.236 "max_cntlid": 65519, 00:18:36.236 "ana_reporting": false 00:18:36.236 } 00:18:36.236 }, 00:18:36.236 { 00:18:36.236 "method": "nvmf_subsystem_add_host", 00:18:36.236 "params": { 00:18:36.236 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.236 "host": "nqn.2016-06.io.spdk:host1", 00:18:36.236 "psk": "key0" 00:18:36.236 } 00:18:36.236 }, 00:18:36.236 { 00:18:36.236 "method": "nvmf_subsystem_add_ns", 00:18:36.236 "params": { 00:18:36.236 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.236 "namespace": { 00:18:36.236 "nsid": 1, 00:18:36.236 "bdev_name": "malloc0", 00:18:36.236 "nguid": "57D032C01ED9435D89885E029CDAF513", 00:18:36.236 "uuid": "57d032c0-1ed9-435d-8988-5e029cdaf513", 00:18:36.236 "no_auto_visible": 
false 00:18:36.236 } 00:18:36.236 } 00:18:36.236 }, 00:18:36.236 { 00:18:36.236 "method": "nvmf_subsystem_add_listener", 00:18:36.236 "params": { 00:18:36.236 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.236 "listen_address": { 00:18:36.236 "trtype": "TCP", 00:18:36.236 "adrfam": "IPv4", 00:18:36.236 "traddr": "10.0.0.2", 00:18:36.236 "trsvcid": "4420" 00:18:36.236 }, 00:18:36.236 "secure_channel": true 00:18:36.236 } 00:18:36.236 } 00:18:36.236 ] 00:18:36.236 } 00:18:36.236 ] 00:18:36.236 }' 00:18:36.236 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.236 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2183177 00:18:36.236 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2183177 00:18:36.236 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:36.236 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2183177 ']' 00:18:36.236 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.236 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.236 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:36.236 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.236 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.236 [2024-11-20 15:27:40.011683] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:18:36.236 [2024-11-20 15:27:40.011729] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.236 [2024-11-20 15:27:40.095065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.236 [2024-11-20 15:27:40.136373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.236 [2024-11-20 15:27:40.136407] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.236 [2024-11-20 15:27:40.136414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.236 [2024-11-20 15:27:40.136420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.236 [2024-11-20 15:27:40.136425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
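[editor's note] For readability, the TLS-relevant pieces of the target configuration echoed above (the keyring entry plus the nvmf subsystem, host, and secure listener) reduce to the following sketch. The key path, NQNs, and listen address are the ones used in this run; every other subsystem and parameter is omitted here and assumed to stay at its default:

```json
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        {
          "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.v4ESxVSNll" }
        }
      ]
    },
    {
      "subsystem": "nvmf",
      "config": [
        {
          "method": "nvmf_create_transport",
          "params": { "trtype": "TCP" }
        },
        {
          "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false }
        },
        {
          "method": "nvmf_subsystem_add_host",
          "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "host": "nqn.2016-06.io.spdk:host1",
            "psk": "key0"
          }
        },
        {
          "method": "nvmf_subsystem_add_listener",
          "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "listen_address": {
              "trtype": "TCP", "adrfam": "IPv4",
              "traddr": "10.0.0.2", "trsvcid": "4420"
            },
            "secure_channel": true
          }
        }
      ]
    }
  ]
}
```

Note how `allow_any_host: false` together with the per-host `psk` reference and `secure_channel: true` on the listener is what forces the TLS handshake that the bdevperf initiator above completes with the matching `key0`.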
00:18:36.236 [2024-11-20 15:27:40.137032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.495 [2024-11-20 15:27:40.349705] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.495 [2024-11-20 15:27:40.381731] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:36.495 [2024-11-20 15:27:40.381970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.063 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.063 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:37.063 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:37.063 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:37.063 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.063 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.063 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2183289 00:18:37.063 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2183289 /var/tmp/bdevperf.sock 00:18:37.063 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2183289 ']' 00:18:37.063 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.063 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:37.063 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:37.063 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.063 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:37.063 "subsystems": [ 00:18:37.063 { 00:18:37.063 "subsystem": "keyring", 00:18:37.063 "config": [ 00:18:37.063 { 00:18:37.064 "method": "keyring_file_add_key", 00:18:37.064 "params": { 00:18:37.064 "name": "key0", 00:18:37.064 "path": "/tmp/tmp.v4ESxVSNll" 00:18:37.064 } 00:18:37.064 } 00:18:37.064 ] 00:18:37.064 }, 00:18:37.064 { 00:18:37.064 "subsystem": "iobuf", 00:18:37.064 "config": [ 00:18:37.064 { 00:18:37.064 "method": "iobuf_set_options", 00:18:37.064 "params": { 00:18:37.064 "small_pool_count": 8192, 00:18:37.064 "large_pool_count": 1024, 00:18:37.064 "small_bufsize": 8192, 00:18:37.064 "large_bufsize": 135168, 00:18:37.064 "enable_numa": false 00:18:37.064 } 00:18:37.064 } 00:18:37.064 ] 00:18:37.064 }, 00:18:37.064 { 00:18:37.064 "subsystem": "sock", 00:18:37.064 "config": [ 00:18:37.064 { 00:18:37.064 "method": "sock_set_default_impl", 00:18:37.064 "params": { 00:18:37.064 "impl_name": "posix" 00:18:37.064 } 00:18:37.064 }, 00:18:37.064 { 00:18:37.064 "method": "sock_impl_set_options", 00:18:37.064 "params": { 00:18:37.064 "impl_name": "ssl", 00:18:37.064 "recv_buf_size": 4096, 00:18:37.064 "send_buf_size": 4096, 00:18:37.064 "enable_recv_pipe": true, 00:18:37.064 "enable_quickack": false, 00:18:37.064 "enable_placement_id": 0, 00:18:37.064 "enable_zerocopy_send_server": true, 00:18:37.064 "enable_zerocopy_send_client": false, 00:18:37.064 "zerocopy_threshold": 0, 00:18:37.064 "tls_version": 0, 00:18:37.064 "enable_ktls": false 00:18:37.064 } 00:18:37.064 }, 00:18:37.064 { 00:18:37.064 "method": "sock_impl_set_options", 00:18:37.064 "params": { 
00:18:37.064 "impl_name": "posix", 00:18:37.064 "recv_buf_size": 2097152, 00:18:37.064 "send_buf_size": 2097152, 00:18:37.064 "enable_recv_pipe": true, 00:18:37.064 "enable_quickack": false, 00:18:37.064 "enable_placement_id": 0, 00:18:37.064 "enable_zerocopy_send_server": true, 00:18:37.064 "enable_zerocopy_send_client": false, 00:18:37.064 "zerocopy_threshold": 0, 00:18:37.064 "tls_version": 0, 00:18:37.064 "enable_ktls": false 00:18:37.064 } 00:18:37.064 } 00:18:37.064 ] 00:18:37.064 }, 00:18:37.064 { 00:18:37.064 "subsystem": "vmd", 00:18:37.064 "config": [] 00:18:37.064 }, 00:18:37.064 { 00:18:37.064 "subsystem": "accel", 00:18:37.064 "config": [ 00:18:37.064 { 00:18:37.064 "method": "accel_set_options", 00:18:37.064 "params": { 00:18:37.064 "small_cache_size": 128, 00:18:37.064 "large_cache_size": 16, 00:18:37.064 "task_count": 2048, 00:18:37.064 "sequence_count": 2048, 00:18:37.064 "buf_count": 2048 00:18:37.064 } 00:18:37.064 } 00:18:37.064 ] 00:18:37.064 }, 00:18:37.064 { 00:18:37.064 "subsystem": "bdev", 00:18:37.064 "config": [ 00:18:37.064 { 00:18:37.064 "method": "bdev_set_options", 00:18:37.064 "params": { 00:18:37.064 "bdev_io_pool_size": 65535, 00:18:37.064 "bdev_io_cache_size": 256, 00:18:37.064 "bdev_auto_examine": true, 00:18:37.064 "iobuf_small_cache_size": 128, 00:18:37.064 "iobuf_large_cache_size": 16 00:18:37.064 } 00:18:37.064 }, 00:18:37.064 { 00:18:37.064 "method": "bdev_raid_set_options", 00:18:37.064 "params": { 00:18:37.064 "process_window_size_kb": 1024, 00:18:37.064 "process_max_bandwidth_mb_sec": 0 00:18:37.064 } 00:18:37.064 }, 00:18:37.064 { 00:18:37.064 "method": "bdev_iscsi_set_options", 00:18:37.064 "params": { 00:18:37.064 "timeout_sec": 30 00:18:37.064 } 00:18:37.064 }, 00:18:37.064 { 00:18:37.064 "method": "bdev_nvme_set_options", 00:18:37.064 "params": { 00:18:37.064 "action_on_timeout": "none", 00:18:37.064 "timeout_us": 0, 00:18:37.064 "timeout_admin_us": 0, 00:18:37.064 "keep_alive_timeout_ms": 10000, 00:18:37.064 
"arbitration_burst": 0, 00:18:37.064 "low_priority_weight": 0, 00:18:37.064 "medium_priority_weight": 0, 00:18:37.064 "high_priority_weight": 0, 00:18:37.064 "nvme_adminq_poll_period_us": 10000, 00:18:37.064 "nvme_ioq_poll_period_us": 0, 00:18:37.064 "io_queue_requests": 512, 00:18:37.064 "delay_cmd_submit": true, 00:18:37.064 "transport_retry_count": 4, 00:18:37.064 "bdev_retry_count": 3, 00:18:37.064 "transport_ack_timeout": 0, 00:18:37.064 "ctrlr_loss_timeout_sec": 0, 00:18:37.064 "reconnect_delay_sec": 0, 00:18:37.064 "fast_io_fail_timeout_sec": 0, 00:18:37.064 "disable_auto_failback": false, 00:18:37.064 "generate_uuids": false, 00:18:37.064 "transport_tos": 0, 00:18:37.064 "nvme_error_stat": false, 00:18:37.064 "rdma_srq_size": 0, 00:18:37.064 "io_path_stat": false, 00:18:37.064 "allow_accel_sequence": false, 00:18:37.064 "rdma_max_cq_size": 0, 00:18:37.064 "rdma_cm_event_timeout_ms": 0, 00:18:37.064 "dhchap_digests": [ 00:18:37.064 "sha256", 00:18:37.064 "sha384", 00:18:37.064 "sha512" 00:18:37.064 ], 00:18:37.064 "dhchap_dhgroups": [ 00:18:37.064 "null", 00:18:37.064 "ffdhe2048", 00:18:37.064 "ffdhe3072", 00:18:37.064 "ffdhe4096", 00:18:37.064 "ffdhe6144", 00:18:37.064 "ffdhe8192" 00:18:37.064 ] 00:18:37.064 } 00:18:37.064 }, 00:18:37.064 { 00:18:37.064 "method": "bdev_nvme_attach_controller", 00:18:37.064 "params": { 00:18:37.064 "name": "TLSTEST", 00:18:37.064 "trtype": "TCP", 00:18:37.064 "adrfam": "IPv4", 00:18:37.064 "traddr": "10.0.0.2", 00:18:37.064 "trsvcid": "4420", 00:18:37.064 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.064 "prchk_reftag": false, 00:18:37.064 "prchk_guard": false, 00:18:37.064 "ctrlr_loss_timeout_sec": 0, 00:18:37.064 "reconnect_delay_sec": 0, 00:18:37.064 "fast_io_fail_timeout_sec": 0, 00:18:37.064 "psk": "key0", 00:18:37.064 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:37.064 "hdgst": false, 00:18:37.064 "ddgst": false, 00:18:37.064 "multipath": "multipath" 00:18:37.064 } 00:18:37.064 }, 00:18:37.064 { 00:18:37.064 
"method": "bdev_nvme_set_hotplug", 00:18:37.064 "params": { 00:18:37.064 "period_us": 100000, 00:18:37.064 "enable": false 00:18:37.064 } 00:18:37.064 }, 00:18:37.064 { 00:18:37.064 "method": "bdev_wait_for_examine" 00:18:37.064 } 00:18:37.064 ] 00:18:37.064 }, 00:18:37.064 { 00:18:37.064 "subsystem": "nbd", 00:18:37.064 "config": [] 00:18:37.064 } 00:18:37.064 ] 00:18:37.064 }' 00:18:37.064 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.064 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.064 [2024-11-20 15:27:40.942322] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:18:37.064 [2024-11-20 15:27:40.942368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2183289 ] 00:18:37.324 [2024-11-20 15:27:41.015978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.324 [2024-11-20 15:27:41.056701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.324 [2024-11-20 15:27:41.209673] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:37.891 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.891 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:37.891 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:38.151 Running I/O for 10 seconds... 
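The MiB/s column bdevperf reports in the runs that follow is simply IOPS scaled by the configured 4096-byte io_size (`-o 4096` on the bdevperf command line above). A minimal standalone check of that relationship, independent of the SPDK build (the `iops_to_mibps` helper name is illustrative, not part of the test scripts):

```shell
# bdevperf reports the same run as both IOPS and MiB/s.
# For fixed-size I/Os: MiB/s = IOPS * io_size / 2^20 (io_size is 4096 bytes here).
iops_to_mibps() {
    awk -v iops="$1" -v io_size="${2:-4096}" \
        'BEGIN { printf "%.6f\n", iops * io_size / 1048576 }'
}

# IOPS figure taken from the 10-second TLSTESTn1 run in this log:
iops_to_mibps 5572.139859313943
```

The printed value agrees, to six decimal places, with the `mibps` field in the JSON results block bdevperf emits for that run.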
00:18:40.022 5454.00 IOPS, 21.30 MiB/s [2024-11-20T14:27:45.305Z] 5567.50 IOPS, 21.75 MiB/s [2024-11-20T14:27:45.873Z] 5562.00 IOPS, 21.73 MiB/s [2024-11-20T14:27:47.253Z] 5591.00 IOPS, 21.84 MiB/s [2024-11-20T14:27:48.188Z] 5617.20 IOPS, 21.94 MiB/s [2024-11-20T14:27:49.125Z] 5566.00 IOPS, 21.74 MiB/s [2024-11-20T14:27:50.082Z] 5578.14 IOPS, 21.79 MiB/s [2024-11-20T14:27:51.019Z] 5584.00 IOPS, 21.81 MiB/s [2024-11-20T14:27:51.956Z] 5570.67 IOPS, 21.76 MiB/s [2024-11-20T14:27:51.956Z] 5573.30 IOPS, 21.77 MiB/s 00:18:48.048 Latency(us) 00:18:48.048 [2024-11-20T14:27:51.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.048 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:48.048 Verification LBA range: start 0x0 length 0x2000 00:18:48.048 TLSTESTn1 : 10.02 5572.14 21.77 0.00 0.00 22929.64 6610.59 30317.52 00:18:48.048 [2024-11-20T14:27:51.956Z] =================================================================================================================== 00:18:48.048 [2024-11-20T14:27:51.956Z] Total : 5572.14 21.77 0.00 0.00 22929.64 6610.59 30317.52 00:18:48.048 { 00:18:48.048 "results": [ 00:18:48.048 { 00:18:48.048 "job": "TLSTESTn1", 00:18:48.048 "core_mask": "0x4", 00:18:48.048 "workload": "verify", 00:18:48.048 "status": "finished", 00:18:48.048 "verify_range": { 00:18:48.048 "start": 0, 00:18:48.048 "length": 8192 00:18:48.048 }, 00:18:48.048 "queue_depth": 128, 00:18:48.048 "io_size": 4096, 00:18:48.048 "runtime": 10.024874, 00:18:48.048 "iops": 5572.139859313943, 00:18:48.048 "mibps": 21.76617132544509, 00:18:48.048 "io_failed": 0, 00:18:48.048 "io_timeout": 0, 00:18:48.048 "avg_latency_us": 22929.643791466242, 00:18:48.048 "min_latency_us": 6610.587826086957, 00:18:48.048 "max_latency_us": 30317.52347826087 00:18:48.048 } 00:18:48.048 ], 00:18:48.048 "core_count": 1 00:18:48.048 } 00:18:48.048 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:48.048 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2183289 00:18:48.048 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2183289 ']' 00:18:48.048 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2183289 00:18:48.048 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:48.048 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.048 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2183289 00:18:48.308 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:48.308 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:48.308 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2183289' 00:18:48.308 killing process with pid 2183289 00:18:48.308 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2183289 00:18:48.308 Received shutdown signal, test time was about 10.000000 seconds 00:18:48.308 00:18:48.308 Latency(us) 00:18:48.308 [2024-11-20T14:27:52.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.308 [2024-11-20T14:27:52.216Z] =================================================================================================================== 00:18:48.308 [2024-11-20T14:27:52.216Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:48.308 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2183289 00:18:48.308 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2183177 00:18:48.308 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2183177 ']' 00:18:48.308 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2183177 00:18:48.308 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:48.308 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.308 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2183177 00:18:48.308 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:48.308 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:48.308 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2183177' 00:18:48.308 killing process with pid 2183177 00:18:48.308 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2183177 00:18:48.308 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2183177 00:18:48.568 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:48.568 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:48.568 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.568 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.568 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2185136 00:18:48.568 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:48.568 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2185136 00:18:48.568 
15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2185136 ']' 00:18:48.568 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.568 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.568 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.568 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.568 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.568 [2024-11-20 15:27:52.414934] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:18:48.568 [2024-11-20 15:27:52.414987] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.827 [2024-11-20 15:27:52.495001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.827 [2024-11-20 15:27:52.535665] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.827 [2024-11-20 15:27:52.535702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.827 [2024-11-20 15:27:52.535709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.827 [2024-11-20 15:27:52.535718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:48.827 [2024-11-20 15:27:52.535723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:48.827 [2024-11-20 15:27:52.536296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.827 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.827 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:48.827 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:48.827 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:48.827 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.827 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.827 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.v4ESxVSNll 00:18:48.827 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.v4ESxVSNll 00:18:48.827 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:49.086 [2024-11-20 15:27:52.843508] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.086 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:49.345 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:49.345 [2024-11-20 15:27:53.208479] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:18:49.345 [2024-11-20 15:27:53.208706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.345 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:49.604 malloc0 00:18:49.604 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:49.863 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.v4ESxVSNll 00:18:50.122 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:50.122 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:50.122 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2185494 00:18:50.122 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:50.122 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2185494 /var/tmp/bdevperf.sock 00:18:50.122 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2185494 ']' 00:18:50.122 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.122 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.122 
15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:50.123 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.123 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.123 [2024-11-20 15:27:53.993851] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:18:50.123 [2024-11-20 15:27:53.993898] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185494 ] 00:18:50.383 [2024-11-20 15:27:54.070424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.383 [2024-11-20 15:27:54.113493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.383 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.383 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:50.383 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.v4ESxVSNll 00:18:50.642 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:50.902 [2024-11-20 15:27:54.554250] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:18:50.902 nvme0n1 00:18:50.902 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:50.902 Running I/O for 1 seconds... 00:18:51.840 5334.00 IOPS, 20.84 MiB/s 00:18:51.840 Latency(us) 00:18:51.840 [2024-11-20T14:27:55.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.840 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:51.840 Verification LBA range: start 0x0 length 0x2000 00:18:51.840 nvme0n1 : 1.01 5392.11 21.06 0.00 0.00 23575.90 5385.35 23592.96 00:18:51.840 [2024-11-20T14:27:55.748Z] =================================================================================================================== 00:18:51.840 [2024-11-20T14:27:55.748Z] Total : 5392.11 21.06 0.00 0.00 23575.90 5385.35 23592.96 00:18:52.100 { 00:18:52.100 "results": [ 00:18:52.100 { 00:18:52.100 "job": "nvme0n1", 00:18:52.100 "core_mask": "0x2", 00:18:52.100 "workload": "verify", 00:18:52.100 "status": "finished", 00:18:52.100 "verify_range": { 00:18:52.100 "start": 0, 00:18:52.100 "length": 8192 00:18:52.100 }, 00:18:52.100 "queue_depth": 128, 00:18:52.100 "io_size": 4096, 00:18:52.100 "runtime": 1.012962, 00:18:52.100 "iops": 5392.107502551922, 00:18:52.100 "mibps": 21.062919931843446, 00:18:52.100 "io_failed": 0, 00:18:52.100 "io_timeout": 0, 00:18:52.100 "avg_latency_us": 23575.901212806268, 00:18:52.100 "min_latency_us": 5385.3495652173915, 00:18:52.100 "max_latency_us": 23592.96 00:18:52.100 } 00:18:52.100 ], 00:18:52.100 "core_count": 1 00:18:52.100 } 00:18:52.100 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2185494 00:18:52.100 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2185494 ']' 00:18:52.100 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2185494 00:18:52.100 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.100 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.100 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2185494 00:18:52.100 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:52.100 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:52.100 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2185494' 00:18:52.100 killing process with pid 2185494 00:18:52.100 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2185494 00:18:52.100 Received shutdown signal, test time was about 1.000000 seconds 00:18:52.100 00:18:52.100 Latency(us) 00:18:52.100 [2024-11-20T14:27:56.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.100 [2024-11-20T14:27:56.008Z] =================================================================================================================== 00:18:52.100 [2024-11-20T14:27:56.008Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:52.100 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2185494 00:18:52.100 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2185136 00:18:52.100 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2185136 ']' 00:18:52.100 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2185136 00:18:52.100 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.100 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.100 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2185136 00:18:52.360 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:52.360 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:52.360 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2185136' 00:18:52.360 killing process with pid 2185136 00:18:52.360 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2185136 00:18:52.360 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2185136 00:18:52.360 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:52.360 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:52.360 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.360 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.360 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2185851 00:18:52.360 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:52.360 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2185851 00:18:52.360 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2185851 ']' 00:18:52.360 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.360 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:18:52.360 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.360 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.360 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.360 [2024-11-20 15:27:56.254253] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:18:52.360 [2024-11-20 15:27:56.254303] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.619 [2024-11-20 15:27:56.332283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.619 [2024-11-20 15:27:56.370730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.619 [2024-11-20 15:27:56.370765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.619 [2024-11-20 15:27:56.370775] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.619 [2024-11-20 15:27:56.370781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.619 [2024-11-20 15:27:56.370787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
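For reference, the target-side TLS setup that this test performs (the setup_nvmf_tgt steps logged earlier via target/tls.sh@50-59 and tls.sh@221) condenses to the RPC sequence below. This is an outline only, not runnable without an SPDK build and a started nvmf_tgt: `rpc.py` stands in for the full scripts/rpc.py path used in the log, and /tmp/tmp.v4ESxVSNll is the test's temporary PSK file.

```sh
# Condensed from the rpc.py invocations recorded in this log.
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.v4ESxVSNll
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
```

The bdevperf side then registers the same key with `keyring_file_add_key` against its own RPC socket and passes `--psk key0` to `bdev_nvme_attach_controller`, which is why both processes log the "TLS support is considered experimental" notice.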
00:18:52.619 [2024-11-20 15:27:56.371369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.619 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.619 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:52.619 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:52.619 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:52.619 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.619 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.619 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:52.619 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.619 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.619 [2024-11-20 15:27:56.517991] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.878 malloc0 00:18:52.878 [2024-11-20 15:27:56.546046] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:52.878 [2024-11-20 15:27:56.546262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.878 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.878 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2185883 00:18:52.878 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2185883 /var/tmp/bdevperf.sock 00:18:52.878 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:52.878 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2185883 ']' 00:18:52.878 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:52.878 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.878 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:52.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:52.878 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.878 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.878 [2024-11-20 15:27:56.622432] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:18:52.878 [2024-11-20 15:27:56.622474] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185883 ] 00:18:52.878 [2024-11-20 15:27:56.699321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.878 [2024-11-20 15:27:56.741542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.137 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.137 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:53.137 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.v4ESxVSNll 00:18:53.137 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:53.396 [2024-11-20 15:27:57.189312] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:53.396 nvme0n1 00:18:53.396 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:53.655 Running I/O for 1 seconds... 
00:18:54.590 5320.00 IOPS, 20.78 MiB/s 00:18:54.590 Latency(us) 00:18:54.590 [2024-11-20T14:27:58.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.590 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:54.590 Verification LBA range: start 0x0 length 0x2000 00:18:54.590 nvme0n1 : 1.02 5348.79 20.89 0.00 0.00 23745.45 5014.93 28493.91 00:18:54.590 [2024-11-20T14:27:58.498Z] =================================================================================================================== 00:18:54.590 [2024-11-20T14:27:58.498Z] Total : 5348.79 20.89 0.00 0.00 23745.45 5014.93 28493.91 00:18:54.590 { 00:18:54.590 "results": [ 00:18:54.590 { 00:18:54.590 "job": "nvme0n1", 00:18:54.590 "core_mask": "0x2", 00:18:54.590 "workload": "verify", 00:18:54.590 "status": "finished", 00:18:54.590 "verify_range": { 00:18:54.590 "start": 0, 00:18:54.590 "length": 8192 00:18:54.590 }, 00:18:54.590 "queue_depth": 128, 00:18:54.590 "io_size": 4096, 00:18:54.590 "runtime": 1.018735, 00:18:54.590 "iops": 5348.790411637963, 00:18:54.590 "mibps": 20.893712545460794, 00:18:54.590 "io_failed": 0, 00:18:54.590 "io_timeout": 0, 00:18:54.590 "avg_latency_us": 23745.446098606048, 00:18:54.590 "min_latency_us": 5014.928695652174, 00:18:54.590 "max_latency_us": 28493.91304347826 00:18:54.590 } 00:18:54.590 ], 00:18:54.590 "core_count": 1 00:18:54.590 } 00:18:54.590 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:54.590 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.590 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.849 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.849 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:54.849 "subsystems": [ 00:18:54.849 { 00:18:54.849 "subsystem": 
"keyring", 00:18:54.849 "config": [ 00:18:54.849 { 00:18:54.849 "method": "keyring_file_add_key", 00:18:54.849 "params": { 00:18:54.849 "name": "key0", 00:18:54.849 "path": "/tmp/tmp.v4ESxVSNll" 00:18:54.849 } 00:18:54.849 } 00:18:54.849 ] 00:18:54.849 }, 00:18:54.849 { 00:18:54.849 "subsystem": "iobuf", 00:18:54.849 "config": [ 00:18:54.849 { 00:18:54.849 "method": "iobuf_set_options", 00:18:54.849 "params": { 00:18:54.849 "small_pool_count": 8192, 00:18:54.849 "large_pool_count": 1024, 00:18:54.849 "small_bufsize": 8192, 00:18:54.849 "large_bufsize": 135168, 00:18:54.849 "enable_numa": false 00:18:54.849 } 00:18:54.849 } 00:18:54.849 ] 00:18:54.849 }, 00:18:54.849 { 00:18:54.849 "subsystem": "sock", 00:18:54.849 "config": [ 00:18:54.849 { 00:18:54.849 "method": "sock_set_default_impl", 00:18:54.849 "params": { 00:18:54.849 "impl_name": "posix" 00:18:54.849 } 00:18:54.849 }, 00:18:54.849 { 00:18:54.849 "method": "sock_impl_set_options", 00:18:54.849 "params": { 00:18:54.849 "impl_name": "ssl", 00:18:54.849 "recv_buf_size": 4096, 00:18:54.849 "send_buf_size": 4096, 00:18:54.849 "enable_recv_pipe": true, 00:18:54.849 "enable_quickack": false, 00:18:54.849 "enable_placement_id": 0, 00:18:54.849 "enable_zerocopy_send_server": true, 00:18:54.849 "enable_zerocopy_send_client": false, 00:18:54.849 "zerocopy_threshold": 0, 00:18:54.849 "tls_version": 0, 00:18:54.849 "enable_ktls": false 00:18:54.849 } 00:18:54.849 }, 00:18:54.849 { 00:18:54.849 "method": "sock_impl_set_options", 00:18:54.849 "params": { 00:18:54.849 "impl_name": "posix", 00:18:54.849 "recv_buf_size": 2097152, 00:18:54.849 "send_buf_size": 2097152, 00:18:54.849 "enable_recv_pipe": true, 00:18:54.849 "enable_quickack": false, 00:18:54.849 "enable_placement_id": 0, 00:18:54.849 "enable_zerocopy_send_server": true, 00:18:54.849 "enable_zerocopy_send_client": false, 00:18:54.849 "zerocopy_threshold": 0, 00:18:54.849 "tls_version": 0, 00:18:54.849 "enable_ktls": false 00:18:54.849 } 00:18:54.849 } 00:18:54.849 
] 00:18:54.849 }, 00:18:54.849 { 00:18:54.849 "subsystem": "vmd", 00:18:54.849 "config": [] 00:18:54.849 }, 00:18:54.849 { 00:18:54.849 "subsystem": "accel", 00:18:54.849 "config": [ 00:18:54.849 { 00:18:54.849 "method": "accel_set_options", 00:18:54.849 "params": { 00:18:54.849 "small_cache_size": 128, 00:18:54.849 "large_cache_size": 16, 00:18:54.849 "task_count": 2048, 00:18:54.849 "sequence_count": 2048, 00:18:54.849 "buf_count": 2048 00:18:54.849 } 00:18:54.849 } 00:18:54.849 ] 00:18:54.849 }, 00:18:54.849 { 00:18:54.849 "subsystem": "bdev", 00:18:54.849 "config": [ 00:18:54.849 { 00:18:54.849 "method": "bdev_set_options", 00:18:54.849 "params": { 00:18:54.849 "bdev_io_pool_size": 65535, 00:18:54.849 "bdev_io_cache_size": 256, 00:18:54.849 "bdev_auto_examine": true, 00:18:54.849 "iobuf_small_cache_size": 128, 00:18:54.849 "iobuf_large_cache_size": 16 00:18:54.849 } 00:18:54.849 }, 00:18:54.849 { 00:18:54.849 "method": "bdev_raid_set_options", 00:18:54.849 "params": { 00:18:54.849 "process_window_size_kb": 1024, 00:18:54.849 "process_max_bandwidth_mb_sec": 0 00:18:54.849 } 00:18:54.849 }, 00:18:54.849 { 00:18:54.849 "method": "bdev_iscsi_set_options", 00:18:54.849 "params": { 00:18:54.850 "timeout_sec": 30 00:18:54.850 } 00:18:54.850 }, 00:18:54.850 { 00:18:54.850 "method": "bdev_nvme_set_options", 00:18:54.850 "params": { 00:18:54.850 "action_on_timeout": "none", 00:18:54.850 "timeout_us": 0, 00:18:54.850 "timeout_admin_us": 0, 00:18:54.850 "keep_alive_timeout_ms": 10000, 00:18:54.850 "arbitration_burst": 0, 00:18:54.850 "low_priority_weight": 0, 00:18:54.850 "medium_priority_weight": 0, 00:18:54.850 "high_priority_weight": 0, 00:18:54.850 "nvme_adminq_poll_period_us": 10000, 00:18:54.850 "nvme_ioq_poll_period_us": 0, 00:18:54.850 "io_queue_requests": 0, 00:18:54.850 "delay_cmd_submit": true, 00:18:54.850 "transport_retry_count": 4, 00:18:54.850 "bdev_retry_count": 3, 00:18:54.850 "transport_ack_timeout": 0, 00:18:54.850 "ctrlr_loss_timeout_sec": 0, 
00:18:54.850 "reconnect_delay_sec": 0, 00:18:54.850 "fast_io_fail_timeout_sec": 0, 00:18:54.850 "disable_auto_failback": false, 00:18:54.850 "generate_uuids": false, 00:18:54.850 "transport_tos": 0, 00:18:54.850 "nvme_error_stat": false, 00:18:54.850 "rdma_srq_size": 0, 00:18:54.850 "io_path_stat": false, 00:18:54.850 "allow_accel_sequence": false, 00:18:54.850 "rdma_max_cq_size": 0, 00:18:54.850 "rdma_cm_event_timeout_ms": 0, 00:18:54.850 "dhchap_digests": [ 00:18:54.850 "sha256", 00:18:54.850 "sha384", 00:18:54.850 "sha512" 00:18:54.850 ], 00:18:54.850 "dhchap_dhgroups": [ 00:18:54.850 "null", 00:18:54.850 "ffdhe2048", 00:18:54.850 "ffdhe3072", 00:18:54.850 "ffdhe4096", 00:18:54.850 "ffdhe6144", 00:18:54.850 "ffdhe8192" 00:18:54.850 ] 00:18:54.850 } 00:18:54.850 }, 00:18:54.850 { 00:18:54.850 "method": "bdev_nvme_set_hotplug", 00:18:54.850 "params": { 00:18:54.850 "period_us": 100000, 00:18:54.850 "enable": false 00:18:54.850 } 00:18:54.850 }, 00:18:54.850 { 00:18:54.850 "method": "bdev_malloc_create", 00:18:54.850 "params": { 00:18:54.850 "name": "malloc0", 00:18:54.850 "num_blocks": 8192, 00:18:54.850 "block_size": 4096, 00:18:54.850 "physical_block_size": 4096, 00:18:54.850 "uuid": "c4a24b5c-d49f-463d-bd15-5f1a9a95d4da", 00:18:54.850 "optimal_io_boundary": 0, 00:18:54.850 "md_size": 0, 00:18:54.850 "dif_type": 0, 00:18:54.850 "dif_is_head_of_md": false, 00:18:54.850 "dif_pi_format": 0 00:18:54.850 } 00:18:54.850 }, 00:18:54.850 { 00:18:54.850 "method": "bdev_wait_for_examine" 00:18:54.850 } 00:18:54.850 ] 00:18:54.850 }, 00:18:54.850 { 00:18:54.850 "subsystem": "nbd", 00:18:54.850 "config": [] 00:18:54.850 }, 00:18:54.850 { 00:18:54.850 "subsystem": "scheduler", 00:18:54.850 "config": [ 00:18:54.850 { 00:18:54.850 "method": "framework_set_scheduler", 00:18:54.850 "params": { 00:18:54.850 "name": "static" 00:18:54.850 } 00:18:54.850 } 00:18:54.850 ] 00:18:54.850 }, 00:18:54.850 { 00:18:54.850 "subsystem": "nvmf", 00:18:54.850 "config": [ 00:18:54.850 { 
00:18:54.850 "method": "nvmf_set_config", 00:18:54.850 "params": { 00:18:54.850 "discovery_filter": "match_any", 00:18:54.850 "admin_cmd_passthru": { 00:18:54.850 "identify_ctrlr": false 00:18:54.850 }, 00:18:54.850 "dhchap_digests": [ 00:18:54.850 "sha256", 00:18:54.850 "sha384", 00:18:54.850 "sha512" 00:18:54.850 ], 00:18:54.850 "dhchap_dhgroups": [ 00:18:54.850 "null", 00:18:54.850 "ffdhe2048", 00:18:54.850 "ffdhe3072", 00:18:54.850 "ffdhe4096", 00:18:54.850 "ffdhe6144", 00:18:54.850 "ffdhe8192" 00:18:54.850 ] 00:18:54.850 } 00:18:54.850 }, 00:18:54.850 { 00:18:54.850 "method": "nvmf_set_max_subsystems", 00:18:54.850 "params": { 00:18:54.850 "max_subsystems": 1024 00:18:54.850 } 00:18:54.850 }, 00:18:54.850 { 00:18:54.850 "method": "nvmf_set_crdt", 00:18:54.850 "params": { 00:18:54.850 "crdt1": 0, 00:18:54.850 "crdt2": 0, 00:18:54.850 "crdt3": 0 00:18:54.850 } 00:18:54.850 }, 00:18:54.850 { 00:18:54.850 "method": "nvmf_create_transport", 00:18:54.850 "params": { 00:18:54.850 "trtype": "TCP", 00:18:54.850 "max_queue_depth": 128, 00:18:54.850 "max_io_qpairs_per_ctrlr": 127, 00:18:54.850 "in_capsule_data_size": 4096, 00:18:54.850 "max_io_size": 131072, 00:18:54.850 "io_unit_size": 131072, 00:18:54.850 "max_aq_depth": 128, 00:18:54.850 "num_shared_buffers": 511, 00:18:54.850 "buf_cache_size": 4294967295, 00:18:54.850 "dif_insert_or_strip": false, 00:18:54.850 "zcopy": false, 00:18:54.850 "c2h_success": false, 00:18:54.850 "sock_priority": 0, 00:18:54.850 "abort_timeout_sec": 1, 00:18:54.850 "ack_timeout": 0, 00:18:54.850 "data_wr_pool_size": 0 00:18:54.850 } 00:18:54.850 }, 00:18:54.850 { 00:18:54.850 "method": "nvmf_create_subsystem", 00:18:54.850 "params": { 00:18:54.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.850 "allow_any_host": false, 00:18:54.850 "serial_number": "00000000000000000000", 00:18:54.850 "model_number": "SPDK bdev Controller", 00:18:54.850 "max_namespaces": 32, 00:18:54.850 "min_cntlid": 1, 00:18:54.850 "max_cntlid": 65519, 00:18:54.850 
"ana_reporting": false 00:18:54.850 } 00:18:54.850 }, 00:18:54.850 { 00:18:54.850 "method": "nvmf_subsystem_add_host", 00:18:54.850 "params": { 00:18:54.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.850 "host": "nqn.2016-06.io.spdk:host1", 00:18:54.850 "psk": "key0" 00:18:54.850 } 00:18:54.850 }, 00:18:54.850 { 00:18:54.850 "method": "nvmf_subsystem_add_ns", 00:18:54.850 "params": { 00:18:54.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.850 "namespace": { 00:18:54.850 "nsid": 1, 00:18:54.850 "bdev_name": "malloc0", 00:18:54.851 "nguid": "C4A24B5CD49F463DBD155F1A9A95D4DA", 00:18:54.851 "uuid": "c4a24b5c-d49f-463d-bd15-5f1a9a95d4da", 00:18:54.851 "no_auto_visible": false 00:18:54.851 } 00:18:54.851 } 00:18:54.851 }, 00:18:54.851 { 00:18:54.851 "method": "nvmf_subsystem_add_listener", 00:18:54.851 "params": { 00:18:54.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.851 "listen_address": { 00:18:54.851 "trtype": "TCP", 00:18:54.851 "adrfam": "IPv4", 00:18:54.851 "traddr": "10.0.0.2", 00:18:54.851 "trsvcid": "4420" 00:18:54.851 }, 00:18:54.851 "secure_channel": false, 00:18:54.851 "sock_impl": "ssl" 00:18:54.851 } 00:18:54.851 } 00:18:54.851 ] 00:18:54.851 } 00:18:54.851 ] 00:18:54.851 }' 00:18:54.851 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:55.111 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:55.111 "subsystems": [ 00:18:55.111 { 00:18:55.111 "subsystem": "keyring", 00:18:55.111 "config": [ 00:18:55.111 { 00:18:55.111 "method": "keyring_file_add_key", 00:18:55.111 "params": { 00:18:55.111 "name": "key0", 00:18:55.111 "path": "/tmp/tmp.v4ESxVSNll" 00:18:55.111 } 00:18:55.111 } 00:18:55.111 ] 00:18:55.111 }, 00:18:55.111 { 00:18:55.111 "subsystem": "iobuf", 00:18:55.111 "config": [ 00:18:55.111 { 00:18:55.111 "method": "iobuf_set_options", 00:18:55.111 "params": { 00:18:55.111 
"small_pool_count": 8192, 00:18:55.111 "large_pool_count": 1024, 00:18:55.111 "small_bufsize": 8192, 00:18:55.111 "large_bufsize": 135168, 00:18:55.111 "enable_numa": false 00:18:55.111 } 00:18:55.111 } 00:18:55.111 ] 00:18:55.111 }, 00:18:55.111 { 00:18:55.111 "subsystem": "sock", 00:18:55.111 "config": [ 00:18:55.111 { 00:18:55.111 "method": "sock_set_default_impl", 00:18:55.111 "params": { 00:18:55.111 "impl_name": "posix" 00:18:55.111 } 00:18:55.111 }, 00:18:55.111 { 00:18:55.111 "method": "sock_impl_set_options", 00:18:55.111 "params": { 00:18:55.111 "impl_name": "ssl", 00:18:55.111 "recv_buf_size": 4096, 00:18:55.111 "send_buf_size": 4096, 00:18:55.111 "enable_recv_pipe": true, 00:18:55.111 "enable_quickack": false, 00:18:55.111 "enable_placement_id": 0, 00:18:55.111 "enable_zerocopy_send_server": true, 00:18:55.111 "enable_zerocopy_send_client": false, 00:18:55.111 "zerocopy_threshold": 0, 00:18:55.111 "tls_version": 0, 00:18:55.111 "enable_ktls": false 00:18:55.111 } 00:18:55.111 }, 00:18:55.111 { 00:18:55.111 "method": "sock_impl_set_options", 00:18:55.111 "params": { 00:18:55.111 "impl_name": "posix", 00:18:55.111 "recv_buf_size": 2097152, 00:18:55.111 "send_buf_size": 2097152, 00:18:55.111 "enable_recv_pipe": true, 00:18:55.111 "enable_quickack": false, 00:18:55.111 "enable_placement_id": 0, 00:18:55.111 "enable_zerocopy_send_server": true, 00:18:55.111 "enable_zerocopy_send_client": false, 00:18:55.111 "zerocopy_threshold": 0, 00:18:55.111 "tls_version": 0, 00:18:55.111 "enable_ktls": false 00:18:55.111 } 00:18:55.111 } 00:18:55.111 ] 00:18:55.111 }, 00:18:55.111 { 00:18:55.111 "subsystem": "vmd", 00:18:55.111 "config": [] 00:18:55.111 }, 00:18:55.111 { 00:18:55.111 "subsystem": "accel", 00:18:55.111 "config": [ 00:18:55.111 { 00:18:55.111 "method": "accel_set_options", 00:18:55.111 "params": { 00:18:55.111 "small_cache_size": 128, 00:18:55.111 "large_cache_size": 16, 00:18:55.111 "task_count": 2048, 00:18:55.111 "sequence_count": 2048, 00:18:55.111 
"buf_count": 2048 00:18:55.111 } 00:18:55.111 } 00:18:55.111 ] 00:18:55.111 }, 00:18:55.111 { 00:18:55.111 "subsystem": "bdev", 00:18:55.111 "config": [ 00:18:55.111 { 00:18:55.111 "method": "bdev_set_options", 00:18:55.111 "params": { 00:18:55.111 "bdev_io_pool_size": 65535, 00:18:55.111 "bdev_io_cache_size": 256, 00:18:55.111 "bdev_auto_examine": true, 00:18:55.111 "iobuf_small_cache_size": 128, 00:18:55.111 "iobuf_large_cache_size": 16 00:18:55.111 } 00:18:55.111 }, 00:18:55.111 { 00:18:55.111 "method": "bdev_raid_set_options", 00:18:55.111 "params": { 00:18:55.111 "process_window_size_kb": 1024, 00:18:55.111 "process_max_bandwidth_mb_sec": 0 00:18:55.111 } 00:18:55.111 }, 00:18:55.111 { 00:18:55.111 "method": "bdev_iscsi_set_options", 00:18:55.111 "params": { 00:18:55.111 "timeout_sec": 30 00:18:55.111 } 00:18:55.111 }, 00:18:55.111 { 00:18:55.111 "method": "bdev_nvme_set_options", 00:18:55.111 "params": { 00:18:55.111 "action_on_timeout": "none", 00:18:55.111 "timeout_us": 0, 00:18:55.111 "timeout_admin_us": 0, 00:18:55.111 "keep_alive_timeout_ms": 10000, 00:18:55.111 "arbitration_burst": 0, 00:18:55.111 "low_priority_weight": 0, 00:18:55.111 "medium_priority_weight": 0, 00:18:55.111 "high_priority_weight": 0, 00:18:55.111 "nvme_adminq_poll_period_us": 10000, 00:18:55.111 "nvme_ioq_poll_period_us": 0, 00:18:55.111 "io_queue_requests": 512, 00:18:55.111 "delay_cmd_submit": true, 00:18:55.111 "transport_retry_count": 4, 00:18:55.111 "bdev_retry_count": 3, 00:18:55.111 "transport_ack_timeout": 0, 00:18:55.111 "ctrlr_loss_timeout_sec": 0, 00:18:55.111 "reconnect_delay_sec": 0, 00:18:55.111 "fast_io_fail_timeout_sec": 0, 00:18:55.111 "disable_auto_failback": false, 00:18:55.111 "generate_uuids": false, 00:18:55.111 "transport_tos": 0, 00:18:55.111 "nvme_error_stat": false, 00:18:55.111 "rdma_srq_size": 0, 00:18:55.111 "io_path_stat": false, 00:18:55.111 "allow_accel_sequence": false, 00:18:55.111 "rdma_max_cq_size": 0, 00:18:55.111 "rdma_cm_event_timeout_ms": 0, 
00:18:55.111 "dhchap_digests": [ 00:18:55.111 "sha256", 00:18:55.111 "sha384", 00:18:55.111 "sha512" 00:18:55.111 ], 00:18:55.111 "dhchap_dhgroups": [ 00:18:55.111 "null", 00:18:55.111 "ffdhe2048", 00:18:55.111 "ffdhe3072", 00:18:55.111 "ffdhe4096", 00:18:55.111 "ffdhe6144", 00:18:55.111 "ffdhe8192" 00:18:55.111 ] 00:18:55.111 } 00:18:55.111 }, 00:18:55.111 { 00:18:55.111 "method": "bdev_nvme_attach_controller", 00:18:55.111 "params": { 00:18:55.111 "name": "nvme0", 00:18:55.111 "trtype": "TCP", 00:18:55.111 "adrfam": "IPv4", 00:18:55.111 "traddr": "10.0.0.2", 00:18:55.111 "trsvcid": "4420", 00:18:55.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.111 "prchk_reftag": false, 00:18:55.111 "prchk_guard": false, 00:18:55.111 "ctrlr_loss_timeout_sec": 0, 00:18:55.111 "reconnect_delay_sec": 0, 00:18:55.111 "fast_io_fail_timeout_sec": 0, 00:18:55.111 "psk": "key0", 00:18:55.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:55.111 "hdgst": false, 00:18:55.111 "ddgst": false, 00:18:55.111 "multipath": "multipath" 00:18:55.111 } 00:18:55.111 }, 00:18:55.111 { 00:18:55.111 "method": "bdev_nvme_set_hotplug", 00:18:55.111 "params": { 00:18:55.111 "period_us": 100000, 00:18:55.111 "enable": false 00:18:55.111 } 00:18:55.111 }, 00:18:55.111 { 00:18:55.111 "method": "bdev_enable_histogram", 00:18:55.111 "params": { 00:18:55.111 "name": "nvme0n1", 00:18:55.111 "enable": true 00:18:55.111 } 00:18:55.111 }, 00:18:55.111 { 00:18:55.111 "method": "bdev_wait_for_examine" 00:18:55.111 } 00:18:55.111 ] 00:18:55.111 }, 00:18:55.111 { 00:18:55.111 "subsystem": "nbd", 00:18:55.111 "config": [] 00:18:55.111 } 00:18:55.111 ] 00:18:55.111 }' 00:18:55.111 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2185883 00:18:55.111 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2185883 ']' 00:18:55.111 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2185883 00:18:55.111 15:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:55.111 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.111 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2185883 00:18:55.111 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:55.111 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:55.111 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2185883' 00:18:55.111 killing process with pid 2185883 00:18:55.111 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2185883 00:18:55.111 Received shutdown signal, test time was about 1.000000 seconds 00:18:55.111 00:18:55.111 Latency(us) 00:18:55.111 [2024-11-20T14:27:59.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.112 [2024-11-20T14:27:59.020Z] =================================================================================================================== 00:18:55.112 [2024-11-20T14:27:59.020Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:55.112 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2185883 00:18:55.112 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2185851 00:18:55.112 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2185851 ']' 00:18:55.112 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2185851 00:18:55.112 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:55.371 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.371 
15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2185851 00:18:55.371 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.371 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.371 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2185851' 00:18:55.371 killing process with pid 2185851 00:18:55.371 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2185851 00:18:55.371 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2185851 00:18:55.371 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:55.371 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:55.371 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:55.371 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:55.371 "subsystems": [ 00:18:55.371 { 00:18:55.371 "subsystem": "keyring", 00:18:55.371 "config": [ 00:18:55.371 { 00:18:55.371 "method": "keyring_file_add_key", 00:18:55.371 "params": { 00:18:55.371 "name": "key0", 00:18:55.371 "path": "/tmp/tmp.v4ESxVSNll" 00:18:55.371 } 00:18:55.371 } 00:18:55.371 ] 00:18:55.371 }, 00:18:55.371 { 00:18:55.371 "subsystem": "iobuf", 00:18:55.371 "config": [ 00:18:55.371 { 00:18:55.371 "method": "iobuf_set_options", 00:18:55.371 "params": { 00:18:55.371 "small_pool_count": 8192, 00:18:55.371 "large_pool_count": 1024, 00:18:55.371 "small_bufsize": 8192, 00:18:55.371 "large_bufsize": 135168, 00:18:55.371 "enable_numa": false 00:18:55.371 } 00:18:55.371 } 00:18:55.371 ] 00:18:55.371 }, 00:18:55.371 { 00:18:55.371 "subsystem": "sock", 00:18:55.371 "config": [ 
00:18:55.371 { 00:18:55.371 "method": "sock_set_default_impl", 00:18:55.372 "params": { 00:18:55.372 "impl_name": "posix" 00:18:55.372 } 00:18:55.372 }, 00:18:55.372 { 00:18:55.372 "method": "sock_impl_set_options", 00:18:55.372 "params": { 00:18:55.372 "impl_name": "ssl", 00:18:55.372 "recv_buf_size": 4096, 00:18:55.372 "send_buf_size": 4096, 00:18:55.372 "enable_recv_pipe": true, 00:18:55.372 "enable_quickack": false, 00:18:55.372 "enable_placement_id": 0, 00:18:55.372 "enable_zerocopy_send_server": true, 00:18:55.372 "enable_zerocopy_send_client": false, 00:18:55.372 "zerocopy_threshold": 0, 00:18:55.372 "tls_version": 0, 00:18:55.372 "enable_ktls": false 00:18:55.372 } 00:18:55.372 }, 00:18:55.372 { 00:18:55.372 "method": "sock_impl_set_options", 00:18:55.372 "params": { 00:18:55.372 "impl_name": "posix", 00:18:55.372 "recv_buf_size": 2097152, 00:18:55.372 "send_buf_size": 2097152, 00:18:55.372 "enable_recv_pipe": true, 00:18:55.372 "enable_quickack": false, 00:18:55.372 "enable_placement_id": 0, 00:18:55.372 "enable_zerocopy_send_server": true, 00:18:55.372 "enable_zerocopy_send_client": false, 00:18:55.372 "zerocopy_threshold": 0, 00:18:55.372 "tls_version": 0, 00:18:55.372 "enable_ktls": false 00:18:55.372 } 00:18:55.372 } 00:18:55.372 ] 00:18:55.372 }, 00:18:55.372 { 00:18:55.372 "subsystem": "vmd", 00:18:55.372 "config": [] 00:18:55.372 }, 00:18:55.372 { 00:18:55.372 "subsystem": "accel", 00:18:55.372 "config": [ 00:18:55.372 { 00:18:55.372 "method": "accel_set_options", 00:18:55.372 "params": { 00:18:55.372 "small_cache_size": 128, 00:18:55.372 "large_cache_size": 16, 00:18:55.372 "task_count": 2048, 00:18:55.372 "sequence_count": 2048, 00:18:55.372 "buf_count": 2048 00:18:55.372 } 00:18:55.372 } 00:18:55.372 ] 00:18:55.372 }, 00:18:55.372 { 00:18:55.372 "subsystem": "bdev", 00:18:55.372 "config": [ 00:18:55.372 { 00:18:55.372 "method": "bdev_set_options", 00:18:55.372 "params": { 00:18:55.372 "bdev_io_pool_size": 65535, 00:18:55.372 "bdev_io_cache_size": 
256, 00:18:55.372 "bdev_auto_examine": true, 00:18:55.372 "iobuf_small_cache_size": 128, 00:18:55.372 "iobuf_large_cache_size": 16 00:18:55.372 } 00:18:55.372 }, 00:18:55.372 { 00:18:55.372 "method": "bdev_raid_set_options", 00:18:55.372 "params": { 00:18:55.372 "process_window_size_kb": 1024, 00:18:55.372 "process_max_bandwidth_mb_sec": 0 00:18:55.372 } 00:18:55.372 }, 00:18:55.372 { 00:18:55.372 "method": "bdev_iscsi_set_options", 00:18:55.372 "params": { 00:18:55.372 "timeout_sec": 30 00:18:55.372 } 00:18:55.372 }, 00:18:55.372 { 00:18:55.372 "method": "bdev_nvme_set_options", 00:18:55.372 "params": { 00:18:55.372 "action_on_timeout": "none", 00:18:55.372 "timeout_us": 0, 00:18:55.372 "timeout_admin_us": 0, 00:18:55.372 "keep_alive_timeout_ms": 10000, 00:18:55.372 "arbitration_burst": 0, 00:18:55.372 "low_priority_weight": 0, 00:18:55.372 "medium_priority_weight": 0, 00:18:55.372 "high_priority_weight": 0, 00:18:55.372 "nvme_adminq_poll_period_us": 10000, 00:18:55.372 "nvme_ioq_poll_period_us": 0, 00:18:55.372 "io_queue_requests": 0, 00:18:55.372 "delay_cmd_submit": true, 00:18:55.372 "transport_retry_count": 4, 00:18:55.372 "bdev_retry_count": 3, 00:18:55.372 "transport_ack_timeout": 0, 00:18:55.372 "ctrlr_loss_timeout_sec": 0, 00:18:55.372 "reconnect_delay_sec": 0, 00:18:55.372 "fast_io_fail_timeout_sec": 0, 00:18:55.372 "disable_auto_failback": false, 00:18:55.372 "generate_uuids": false, 00:18:55.372 "transport_tos": 0, 00:18:55.372 "nvme_error_stat": false, 00:18:55.372 "rdma_srq_size": 0, 00:18:55.372 "io_path_stat": false, 00:18:55.372 "allow_accel_sequence": false, 00:18:55.372 "rdma_max_cq_size": 0, 00:18:55.372 "rdma_cm_event_timeout_ms": 0, 00:18:55.372 "dhchap_digests": [ 00:18:55.372 "sha256", 00:18:55.372 "sha384", 00:18:55.372 "sha512" 00:18:55.372 ], 00:18:55.372 "dhchap_dhgroups": [ 00:18:55.372 "null", 00:18:55.372 "ffdhe2048", 00:18:55.372 "ffdhe3072", 00:18:55.372 "ffdhe4096", 00:18:55.372 "ffdhe6144", 00:18:55.372 "ffdhe8192" 00:18:55.372 ] 
00:18:55.372 } 00:18:55.372 }, 00:18:55.372 { 00:18:55.372 "method": "bdev_nvme_set_hotplug", 00:18:55.372 "params": { 00:18:55.372 "period_us": 100000, 00:18:55.372 "enable": false 00:18:55.372 } 00:18:55.372 }, 00:18:55.372 { 00:18:55.372 "method": "bdev_malloc_create", 00:18:55.372 "params": { 00:18:55.372 "name": "malloc0", 00:18:55.372 "num_blocks": 8192, 00:18:55.372 "block_size": 4096, 00:18:55.372 "physical_block_size": 4096, 00:18:55.372 "uuid": "c4a24b5c-d49f-463d-bd15-5f1a9a95d4da", 00:18:55.372 "optimal_io_boundary": 0, 00:18:55.372 "md_size": 0, 00:18:55.372 "dif_type": 0, 00:18:55.372 "dif_is_head_of_md": false, 00:18:55.372 "dif_pi_format": 0 00:18:55.372 } 00:18:55.372 }, 00:18:55.372 { 00:18:55.372 "method": "bdev_wait_for_examine" 00:18:55.372 } 00:18:55.372 ] 00:18:55.372 }, 00:18:55.372 { 00:18:55.372 "subsystem": "nbd", 00:18:55.372 "config": [] 00:18:55.372 }, 00:18:55.372 { 00:18:55.372 "subsystem": "scheduler", 00:18:55.372 "config": [ 00:18:55.372 { 00:18:55.372 "method": "framework_set_scheduler", 00:18:55.372 "params": { 00:18:55.372 "name": "static" 00:18:55.372 } 00:18:55.372 } 00:18:55.372 ] 00:18:55.372 }, 00:18:55.372 { 00:18:55.372 "subsystem": "nvmf", 00:18:55.372 "config": [ 00:18:55.372 { 00:18:55.372 "method": "nvmf_set_config", 00:18:55.372 "params": { 00:18:55.372 "discovery_filter": "match_any", 00:18:55.372 "admin_cmd_passthru": { 00:18:55.372 "identify_ctrlr": false 00:18:55.372 }, 00:18:55.372 "dhchap_digests": [ 00:18:55.372 "sha256", 00:18:55.372 "sha384", 00:18:55.372 "sha512" 00:18:55.372 ], 00:18:55.372 "dhchap_dhgroups": [ 00:18:55.372 "null", 00:18:55.372 "ffdhe2048", 00:18:55.372 "ffdhe3072", 00:18:55.372 "ffdhe4096", 00:18:55.372 "ffdhe6144", 00:18:55.372 "ffdhe8192" 00:18:55.372 ] 00:18:55.372 } 00:18:55.372 }, 00:18:55.372 { 00:18:55.372 "method": "nvmf_set_max_subsystems", 00:18:55.372 "params": { 00:18:55.372 "max_subsystems": 1024 00:18:55.372 } 00:18:55.372 }, 00:18:55.372 { 00:18:55.372 "method": 
"nvmf_set_crdt", 00:18:55.372 "params": { 00:18:55.372 "crdt1": 0, 00:18:55.372 "crdt2": 0, 00:18:55.372 "crdt3": 0 00:18:55.372 } 00:18:55.372 }, 00:18:55.372 { 00:18:55.372 "method": "nvmf_create_transport", 00:18:55.372 "params": { 00:18:55.372 "trtype": "TCP", 00:18:55.372 "max_queue_depth": 128, 00:18:55.372 "max_io_qpairs_per_ctrlr": 127, 00:18:55.372 "in_capsule_data_size": 4096, 00:18:55.372 "max_io_size": 131072, 00:18:55.372 "io_unit_size": 131072, 00:18:55.372 "max_aq_depth": 128, 00:18:55.372 "num_shared_buffers": 511, 00:18:55.372 "buf_cache_size": 4294967295, 00:18:55.372 "dif_insert_or_strip": false, 00:18:55.372 "zcopy": false, 00:18:55.372 "c2h_success": false, 00:18:55.372 "sock_priority": 0, 00:18:55.372 "abort_timeout_sec": 1, 00:18:55.372 "ack_timeout": 0, 00:18:55.372 "data_wr_pool_size": 0 00:18:55.372 } 00:18:55.372 }, 00:18:55.372 { 00:18:55.372 "method": "nvmf_create_subsystem", 00:18:55.372 "params": { 00:18:55.372 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.372 "allow_any_host": false, 00:18:55.372 "serial_number": "00000000000000000000", 00:18:55.372 "model_number": "SPDK bdev Controller", 00:18:55.372 "max_namespaces": 32, 00:18:55.372 "min_cntlid": 1, 00:18:55.372 "max_cntlid": 65519, 00:18:55.372 "ana_reporting": false 00:18:55.372 } 00:18:55.372 }, 00:18:55.372 { 00:18:55.372 "method": "nvmf_subsystem_add_host", 00:18:55.372 "params": { 00:18:55.372 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.373 "host": "nqn.2016-06.io.spdk:host1", 00:18:55.373 "psk": "key0" 00:18:55.373 } 00:18:55.373 }, 00:18:55.373 { 00:18:55.373 "method": "nvmf_subsystem_add_ns", 00:18:55.373 "params": { 00:18:55.373 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.373 "namespace": { 00:18:55.373 "nsid": 1, 00:18:55.373 "bdev_name": "malloc0", 00:18:55.373 "nguid": "C4A24B5CD49F463DBD155F1A9A95D4DA", 00:18:55.373 "uuid": "c4a24b5c-d49f-463d-bd15-5f1a9a95d4da", 00:18:55.373 "no_auto_visible": false 00:18:55.373 } 00:18:55.373 } 00:18:55.373 }, 00:18:55.373 { 
00:18:55.373 "method": "nvmf_subsystem_add_listener", 00:18:55.373 "params": { 00:18:55.373 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.373 "listen_address": { 00:18:55.373 "trtype": "TCP", 00:18:55.373 "adrfam": "IPv4", 00:18:55.373 "traddr": "10.0.0.2", 00:18:55.373 "trsvcid": "4420" 00:18:55.373 }, 00:18:55.373 "secure_channel": false, 00:18:55.373 "sock_impl": "ssl" 00:18:55.373 } 00:18:55.373 } 00:18:55.373 ] 00:18:55.373 } 00:18:55.373 ] 00:18:55.373 }' 00:18:55.373 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.373 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2186348 00:18:55.373 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2186348 00:18:55.373 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:55.373 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2186348 ']' 00:18:55.373 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.373 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.373 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.373 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.373 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.632 [2024-11-20 15:27:59.287370] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:18:55.632 [2024-11-20 15:27:59.287421] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.632 [2024-11-20 15:27:59.367733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.632 [2024-11-20 15:27:59.404152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.632 [2024-11-20 15:27:59.404187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.632 [2024-11-20 15:27:59.404194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.632 [2024-11-20 15:27:59.404199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.632 [2024-11-20 15:27:59.404204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:55.632 [2024-11-20 15:27:59.404780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.891 [2024-11-20 15:27:59.618117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.891 [2024-11-20 15:27:59.650145] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:55.891 [2024-11-20 15:27:59.650371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.461 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.461 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:56.461 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:56.461 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:56.461 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.461 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.461 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2186595 00:18:56.461 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2186595 /var/tmp/bdevperf.sock 00:18:56.461 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2186595 ']' 00:18:56.461 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:56.461 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:56.461 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:56.461 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:56.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:56.461 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:56.461 "subsystems": [ 00:18:56.461 { 00:18:56.461 "subsystem": "keyring", 00:18:56.461 "config": [ 00:18:56.461 { 00:18:56.461 "method": "keyring_file_add_key", 00:18:56.461 "params": { 00:18:56.461 "name": "key0", 00:18:56.461 "path": "/tmp/tmp.v4ESxVSNll" 00:18:56.461 } 00:18:56.461 } 00:18:56.461 ] 00:18:56.461 }, 00:18:56.461 { 00:18:56.461 "subsystem": "iobuf", 00:18:56.461 "config": [ 00:18:56.461 { 00:18:56.461 "method": "iobuf_set_options", 00:18:56.461 "params": { 00:18:56.461 "small_pool_count": 8192, 00:18:56.461 "large_pool_count": 1024, 00:18:56.461 "small_bufsize": 8192, 00:18:56.461 "large_bufsize": 135168, 00:18:56.461 "enable_numa": false 00:18:56.461 } 00:18:56.461 } 00:18:56.461 ] 00:18:56.461 }, 00:18:56.461 { 00:18:56.461 "subsystem": "sock", 00:18:56.461 "config": [ 00:18:56.461 { 00:18:56.461 "method": "sock_set_default_impl", 00:18:56.461 "params": { 00:18:56.461 "impl_name": "posix" 00:18:56.461 } 00:18:56.461 }, 00:18:56.461 { 00:18:56.461 "method": "sock_impl_set_options", 00:18:56.461 "params": { 00:18:56.461 "impl_name": "ssl", 00:18:56.461 "recv_buf_size": 4096, 00:18:56.461 "send_buf_size": 4096, 00:18:56.461 "enable_recv_pipe": true, 00:18:56.461 "enable_quickack": false, 00:18:56.461 "enable_placement_id": 0, 00:18:56.461 "enable_zerocopy_send_server": true, 00:18:56.461 "enable_zerocopy_send_client": false, 00:18:56.461 "zerocopy_threshold": 0, 00:18:56.461 "tls_version": 0, 00:18:56.461 "enable_ktls": false 00:18:56.461 } 00:18:56.461 }, 00:18:56.461 { 00:18:56.461 "method": "sock_impl_set_options", 00:18:56.461 "params": { 
00:18:56.461 "impl_name": "posix", 00:18:56.461 "recv_buf_size": 2097152, 00:18:56.461 "send_buf_size": 2097152, 00:18:56.461 "enable_recv_pipe": true, 00:18:56.461 "enable_quickack": false, 00:18:56.461 "enable_placement_id": 0, 00:18:56.461 "enable_zerocopy_send_server": true, 00:18:56.461 "enable_zerocopy_send_client": false, 00:18:56.461 "zerocopy_threshold": 0, 00:18:56.461 "tls_version": 0, 00:18:56.461 "enable_ktls": false 00:18:56.461 } 00:18:56.461 } 00:18:56.461 ] 00:18:56.461 }, 00:18:56.461 { 00:18:56.461 "subsystem": "vmd", 00:18:56.461 "config": [] 00:18:56.461 }, 00:18:56.461 { 00:18:56.461 "subsystem": "accel", 00:18:56.461 "config": [ 00:18:56.461 { 00:18:56.461 "method": "accel_set_options", 00:18:56.461 "params": { 00:18:56.461 "small_cache_size": 128, 00:18:56.461 "large_cache_size": 16, 00:18:56.461 "task_count": 2048, 00:18:56.461 "sequence_count": 2048, 00:18:56.461 "buf_count": 2048 00:18:56.461 } 00:18:56.461 } 00:18:56.461 ] 00:18:56.461 }, 00:18:56.461 { 00:18:56.461 "subsystem": "bdev", 00:18:56.461 "config": [ 00:18:56.461 { 00:18:56.461 "method": "bdev_set_options", 00:18:56.461 "params": { 00:18:56.461 "bdev_io_pool_size": 65535, 00:18:56.461 "bdev_io_cache_size": 256, 00:18:56.461 "bdev_auto_examine": true, 00:18:56.461 "iobuf_small_cache_size": 128, 00:18:56.461 "iobuf_large_cache_size": 16 00:18:56.461 } 00:18:56.461 }, 00:18:56.461 { 00:18:56.461 "method": "bdev_raid_set_options", 00:18:56.461 "params": { 00:18:56.461 "process_window_size_kb": 1024, 00:18:56.461 "process_max_bandwidth_mb_sec": 0 00:18:56.461 } 00:18:56.461 }, 00:18:56.461 { 00:18:56.461 "method": "bdev_iscsi_set_options", 00:18:56.461 "params": { 00:18:56.461 "timeout_sec": 30 00:18:56.461 } 00:18:56.461 }, 00:18:56.461 { 00:18:56.461 "method": "bdev_nvme_set_options", 00:18:56.461 "params": { 00:18:56.461 "action_on_timeout": "none", 00:18:56.461 "timeout_us": 0, 00:18:56.461 "timeout_admin_us": 0, 00:18:56.461 "keep_alive_timeout_ms": 10000, 00:18:56.461 
"arbitration_burst": 0, 00:18:56.461 "low_priority_weight": 0, 00:18:56.461 "medium_priority_weight": 0, 00:18:56.461 "high_priority_weight": 0, 00:18:56.461 "nvme_adminq_poll_period_us": 10000, 00:18:56.461 "nvme_ioq_poll_period_us": 0, 00:18:56.461 "io_queue_requests": 512, 00:18:56.461 "delay_cmd_submit": true, 00:18:56.461 "transport_retry_count": 4, 00:18:56.461 "bdev_retry_count": 3, 00:18:56.461 "transport_ack_timeout": 0, 00:18:56.461 "ctrlr_loss_timeout_sec": 0, 00:18:56.461 "reconnect_delay_sec": 0, 00:18:56.461 "fast_io_fail_timeout_sec": 0, 00:18:56.461 "disable_auto_failback": false, 00:18:56.461 "generate_uuids": false, 00:18:56.461 "transport_tos": 0, 00:18:56.461 "nvme_error_stat": false, 00:18:56.461 "rdma_srq_size": 0, 00:18:56.461 "io_path_stat": false, 00:18:56.461 "allow_accel_sequence": false, 00:18:56.461 "rdma_max_cq_size": 0, 00:18:56.461 "rdma_cm_event_timeout_ms": 0, 00:18:56.461 "dhchap_digests": [ 00:18:56.461 "sha256", 00:18:56.461 "sha384", 00:18:56.461 "sha512" 00:18:56.461 ], 00:18:56.461 "dhchap_dhgroups": [ 00:18:56.461 "null", 00:18:56.461 "ffdhe2048", 00:18:56.461 "ffdhe3072", 00:18:56.461 "ffdhe4096", 00:18:56.461 "ffdhe6144", 00:18:56.461 "ffdhe8192" 00:18:56.461 ] 00:18:56.461 } 00:18:56.461 }, 00:18:56.461 { 00:18:56.461 "method": "bdev_nvme_attach_controller", 00:18:56.461 "params": { 00:18:56.461 "name": "nvme0", 00:18:56.461 "trtype": "TCP", 00:18:56.461 "adrfam": "IPv4", 00:18:56.461 "traddr": "10.0.0.2", 00:18:56.461 "trsvcid": "4420", 00:18:56.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.461 "prchk_reftag": false, 00:18:56.461 "prchk_guard": false, 00:18:56.461 "ctrlr_loss_timeout_sec": 0, 00:18:56.462 "reconnect_delay_sec": 0, 00:18:56.462 "fast_io_fail_timeout_sec": 0, 00:18:56.462 "psk": "key0", 00:18:56.462 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:56.462 "hdgst": false, 00:18:56.462 "ddgst": false, 00:18:56.462 "multipath": "multipath" 00:18:56.462 } 00:18:56.462 }, 00:18:56.462 { 00:18:56.462 
"method": "bdev_nvme_set_hotplug", 00:18:56.462 "params": { 00:18:56.462 "period_us": 100000, 00:18:56.462 "enable": false 00:18:56.462 } 00:18:56.462 }, 00:18:56.462 { 00:18:56.462 "method": "bdev_enable_histogram", 00:18:56.462 "params": { 00:18:56.462 "name": "nvme0n1", 00:18:56.462 "enable": true 00:18:56.462 } 00:18:56.462 }, 00:18:56.462 { 00:18:56.462 "method": "bdev_wait_for_examine" 00:18:56.462 } 00:18:56.462 ] 00:18:56.462 }, 00:18:56.462 { 00:18:56.462 "subsystem": "nbd", 00:18:56.462 "config": [] 00:18:56.462 } 00:18:56.462 ] 00:18:56.462 }' 00:18:56.462 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.462 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.462 [2024-11-20 15:28:00.217257] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:18:56.462 [2024-11-20 15:28:00.217307] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186595 ] 00:18:56.462 [2024-11-20 15:28:00.294848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.462 [2024-11-20 15:28:00.336071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.721 [2024-11-20 15:28:00.489037] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:57.288 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.288 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:57.288 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:57.288 15:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:57.547 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.547 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:57.547 Running I/O for 1 seconds... 00:18:58.743 5148.00 IOPS, 20.11 MiB/s 00:18:58.743 Latency(us) 00:18:58.743 [2024-11-20T14:28:02.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.743 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:58.743 Verification LBA range: start 0x0 length 0x2000 00:18:58.743 nvme0n1 : 1.02 5180.36 20.24 0.00 0.00 24499.16 6781.55 51972.90 00:18:58.743 [2024-11-20T14:28:02.651Z] =================================================================================================================== 00:18:58.743 [2024-11-20T14:28:02.651Z] Total : 5180.36 20.24 0.00 0.00 24499.16 6781.55 51972.90 00:18:58.743 { 00:18:58.743 "results": [ 00:18:58.743 { 00:18:58.743 "job": "nvme0n1", 00:18:58.743 "core_mask": "0x2", 00:18:58.743 "workload": "verify", 00:18:58.743 "status": "finished", 00:18:58.743 "verify_range": { 00:18:58.743 "start": 0, 00:18:58.743 "length": 8192 00:18:58.743 }, 00:18:58.743 "queue_depth": 128, 00:18:58.743 "io_size": 4096, 00:18:58.743 "runtime": 1.018656, 00:18:58.743 "iops": 5180.3552916784465, 00:18:58.743 "mibps": 20.23576285811893, 00:18:58.743 "io_failed": 0, 00:18:58.743 "io_timeout": 0, 00:18:58.743 "avg_latency_us": 24499.160388560693, 00:18:58.743 "min_latency_us": 6781.551304347826, 00:18:58.743 "max_latency_us": 51972.897391304345 00:18:58.743 } 00:18:58.743 ], 00:18:58.743 "core_count": 1 00:18:58.743 } 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:58.743 15:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:58.743 nvmf_trace.0 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2186595 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2186595 ']' 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2186595 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2186595 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2186595' 00:18:58.743 killing process with pid 2186595 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2186595 00:18:58.743 Received shutdown signal, test time was about 1.000000 seconds 00:18:58.743 00:18:58.743 Latency(us) 00:18:58.743 [2024-11-20T14:28:02.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.743 [2024-11-20T14:28:02.651Z] =================================================================================================================== 00:18:58.743 [2024-11-20T14:28:02.651Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:58.743 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2186595 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:59.003 rmmod nvme_tcp 00:18:59.003 rmmod nvme_fabrics 00:18:59.003 rmmod nvme_keyring 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2186348 ']' 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2186348 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2186348 ']' 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2186348 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2186348 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2186348' 00:18:59.003 killing process with pid 2186348 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2186348 00:18:59.003 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2186348 00:18:59.263 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:59.263 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:59.263 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:59.263 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:18:59.263 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:59.263 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:59.263 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:59.263 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:59.263 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:59.263 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.263 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:59.263 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.169 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:01.169 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.5OoTjj8Hbo /tmp/tmp.2DMUEMks4h /tmp/tmp.v4ESxVSNll 00:19:01.169 00:19:01.169 real 1m19.607s 00:19:01.169 user 2m2.559s 00:19:01.169 sys 0m30.058s 00:19:01.169 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.169 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.169 ************************************ 00:19:01.169 END TEST nvmf_tls 00:19:01.169 ************************************ 00:19:01.429 15:28:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:01.429 15:28:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:01.429 15:28:05 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:01.429 15:28:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:01.429 ************************************ 00:19:01.429 START TEST nvmf_fips 00:19:01.429 ************************************ 00:19:01.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:01.429 * Looking for test storage... 00:19:01.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:01.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:01.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:19:01.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:01.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:01.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:01.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:01.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:01.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:01.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:01.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:01.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:01.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:01.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:01.429 
15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:01.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:01.430 15:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:01.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.430 --rc genhtml_branch_coverage=1 00:19:01.430 --rc genhtml_function_coverage=1 00:19:01.430 --rc genhtml_legend=1 00:19:01.430 --rc geninfo_all_blocks=1 00:19:01.430 --rc geninfo_unexecuted_blocks=1 00:19:01.430 00:19:01.430 ' 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:01.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.430 --rc genhtml_branch_coverage=1 00:19:01.430 --rc genhtml_function_coverage=1 00:19:01.430 --rc genhtml_legend=1 00:19:01.430 --rc geninfo_all_blocks=1 00:19:01.430 --rc geninfo_unexecuted_blocks=1 00:19:01.430 00:19:01.430 ' 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:01.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.430 --rc genhtml_branch_coverage=1 00:19:01.430 --rc genhtml_function_coverage=1 00:19:01.430 --rc genhtml_legend=1 00:19:01.430 --rc geninfo_all_blocks=1 00:19:01.430 --rc geninfo_unexecuted_blocks=1 00:19:01.430 00:19:01.430 ' 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:01.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.430 --rc genhtml_branch_coverage=1 00:19:01.430 --rc genhtml_function_coverage=1 00:19:01.430 --rc genhtml_legend=1 00:19:01.430 --rc geninfo_all_blocks=1 00:19:01.430 --rc geninfo_unexecuted_blocks=1 00:19:01.430 00:19:01.430 ' 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:01.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:01.690 15:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.690 15:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:01.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:01.690 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:01.691 Error setting digest 00:19:01.691 40E2200E167F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:01.691 40E2200E167F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:01.691 15:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:01.691 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
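(Editorial note: the e810/x722/mlx array setup traced above buckets NICs by PCI vendor:device ID. A hypothetical condensed form of that matching, using only the IDs common.sh references, could look like this; the function itself is illustrative, not SPDK code.)

```shell
# Classify a NIC family from its PCI vendor and device ID, using the
# Intel E810/X722 and Mellanox IDs seen in the nvmf/common.sh trace.
classify_nic() {
    local vendor=$1 device=$2
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;   # Intel E810 parts
        0x8086:0x37d2)               echo x722 ;;   # Intel X722
        0x15b3:*)                    echo mlx  ;;   # Mellanox parts
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086 0x159b   # the 0000:86:00.x ports found in this run
# → e810
```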
00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:08.265 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:08.265 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:08.265 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:08.266 Found net devices under 0000:86:00.0: cvl_0_0 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
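(Editorial note: the "Found net devices under &lt;pci&gt;" steps glob the `net/` subdirectory of each PCI device in sysfs and strip the paths down to interface names. A runnable miniature of that step, with a throwaway directory standing in for `/sys/bus/pci/devices`, is sketched below.)

```shell
# Glob a PCI device's net/ directory and keep only interface names,
# mirroring the pci_net_devs handling in nvmf/common.sh. A temp dir
# stands in for the real sysfs tree here.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0"

pci_net_devs=("$sysfs/0000:86:00.0/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")     # basename, as common.sh does

echo "Found net devices under 0000:86:00.0: ${pci_net_devs[*]}"
# → Found net devices under 0000:86:00.0: cvl_0_0
rm -rf "$sysfs"
```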
00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:08.266 Found net devices under 0000:86:00.1: cvl_0_1 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:08.266 15:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:08.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:08.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:19:08.266 00:19:08.266 --- 10.0.0.2 ping statistics --- 00:19:08.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.266 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:08.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:08.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:19:08.266 00:19:08.266 --- 10.0.0.1 ping statistics --- 00:19:08.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.266 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:08.266 15:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2190612 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2190612 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2190612 ']' 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.266 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:08.266 [2024-11-20 15:28:11.550063] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:19:08.266 [2024-11-20 15:28:11.550110] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.266 [2024-11-20 15:28:11.616301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.266 [2024-11-20 15:28:11.658338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.266 [2024-11-20 15:28:11.658372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.266 [2024-11-20 15:28:11.658379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.266 [2024-11-20 15:28:11.658386] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.266 [2024-11-20 15:28:11.658392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
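(Editorial note: the connectivity checks a little earlier in the trace end with ping(8) summary lines such as "rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms". A small filter like the following pulls the average RTT out of that summary; it is illustrative and not part of common.sh.)

```shell
# Extract the average round-trip time (ms) from a ping(8) summary line.
# Splitting on both spaces and slashes puts the avg value in field 8.
ping_avg_ms() {
    awk -F'[ /]' '/^rtt/ {print $8}'
}

printf 'rtt min/avg/max/mdev = 0.325/0.330/0.335/0.004 ms\n' | ping_avg_ms
# → 0.330
```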
00:19:08.266 [2024-11-20 15:28:11.658880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.525 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.525 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:08.525 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:08.525 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:08.525 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:08.525 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.525 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:08.525 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:08.525 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:08.525 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.qGW 00:19:08.525 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:08.525 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.qGW 00:19:08.525 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.qGW 00:19:08.525 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.qGW 00:19:08.525 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:08.784 [2024-11-20 15:28:12.600726] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.784 [2024-11-20 15:28:12.616739] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:08.784 [2024-11-20 15:28:12.616945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.784 malloc0 00:19:08.784 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:08.784 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2190864 00:19:08.784 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:08.784 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2190864 /var/tmp/bdevperf.sock 00:19:08.784 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2190864 ']' 00:19:08.784 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:08.784 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.784 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:08.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:08.784 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.784 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:09.045 [2024-11-20 15:28:12.746986] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:19:09.045 [2024-11-20 15:28:12.747035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190864 ] 00:19:09.045 [2024-11-20 15:28:12.821555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.045 [2024-11-20 15:28:12.864093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.983 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.983 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:09.983 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.qGW 00:19:09.983 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:10.242 [2024-11-20 15:28:13.925735] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:10.242 TLSTESTn1 00:19:10.242 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:10.242 Running I/O for 10 seconds... 
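(Editorial note on the units in the bdevperf summary that follows: the run uses 4096-byte IOs, so MiB/s = IOPS × 4096 / 1048576. Reproducing the run's reported average of 5071.45 IOPS:)

```shell
# Convert the run's reported IOPS to MiB/s for 4 KiB IOs.
awk 'BEGIN { iops = 5071.452850502299; printf "%.2f MiB/s\n", iops * 4096 / 1048576 }'
# → 19.81 MiB/s
```

This matches the 19.81 MiB/s (mibps) figure bdevperf prints in its own summary.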
00:19:12.558 5346.00 IOPS, 20.88 MiB/s [2024-11-20T14:28:17.519Z] 5353.00 IOPS, 20.91 MiB/s [2024-11-20T14:28:18.455Z] 5427.00 IOPS, 21.20 MiB/s [2024-11-20T14:28:19.391Z] 5428.25 IOPS, 21.20 MiB/s [2024-11-20T14:28:20.326Z] 5296.40 IOPS, 20.69 MiB/s [2024-11-20T14:28:21.261Z] 5230.17 IOPS, 20.43 MiB/s [2024-11-20T14:28:22.197Z] 5178.00 IOPS, 20.23 MiB/s [2024-11-20T14:28:23.574Z] 5117.25 IOPS, 19.99 MiB/s [2024-11-20T14:28:24.511Z] 5099.11 IOPS, 19.92 MiB/s [2024-11-20T14:28:24.511Z] 5067.60 IOPS, 19.80 MiB/s 00:19:20.603 Latency(us) 00:19:20.603 [2024-11-20T14:28:24.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.603 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:20.603 Verification LBA range: start 0x0 length 0x2000 00:19:20.603 TLSTESTn1 : 10.02 5071.45 19.81 0.00 0.00 25202.18 5071.92 34192.70 00:19:20.603 [2024-11-20T14:28:24.511Z] =================================================================================================================== 00:19:20.603 [2024-11-20T14:28:24.511Z] Total : 5071.45 19.81 0.00 0.00 25202.18 5071.92 34192.70 00:19:20.603 { 00:19:20.603 "results": [ 00:19:20.603 { 00:19:20.603 "job": "TLSTESTn1", 00:19:20.603 "core_mask": "0x4", 00:19:20.603 "workload": "verify", 00:19:20.603 "status": "finished", 00:19:20.603 "verify_range": { 00:19:20.603 "start": 0, 00:19:20.603 "length": 8192 00:19:20.603 }, 00:19:20.603 "queue_depth": 128, 00:19:20.603 "io_size": 4096, 00:19:20.603 "runtime": 10.017445, 00:19:20.603 "iops": 5071.452850502299, 00:19:20.603 "mibps": 19.810362697274606, 00:19:20.603 "io_failed": 0, 00:19:20.603 "io_timeout": 0, 00:19:20.603 "avg_latency_us": 25202.182385600303, 00:19:20.603 "min_latency_us": 5071.91652173913, 00:19:20.603 "max_latency_us": 34192.69565217391 00:19:20.603 } 00:19:20.603 ], 00:19:20.603 "core_count": 1 00:19:20.603 } 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:20.603 
15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:20.603 nvmf_trace.0 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2190864 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2190864 ']' 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2190864 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2190864 00:19:20.603 15:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2190864' 00:19:20.603 killing process with pid 2190864 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2190864 00:19:20.603 Received shutdown signal, test time was about 10.000000 seconds 00:19:20.603 00:19:20.603 Latency(us) 00:19:20.603 [2024-11-20T14:28:24.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.603 [2024-11-20T14:28:24.511Z] =================================================================================================================== 00:19:20.603 [2024-11-20T14:28:24.511Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2190864 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:20.603 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:20.603 rmmod nvme_tcp 00:19:20.603 rmmod nvme_fabrics 00:19:20.863 rmmod nvme_keyring 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2190612 ']' 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2190612 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2190612 ']' 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2190612 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2190612 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2190612' 00:19:20.863 killing process with pid 2190612 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2190612 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2190612 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:20.863 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.401 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:23.401 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.qGW 00:19:23.401 00:19:23.401 real 0m21.701s 00:19:23.401 user 0m22.950s 00:19:23.401 sys 0m10.165s 00:19:23.401 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.401 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:23.401 ************************************ 00:19:23.401 END TEST nvmf_fips 00:19:23.401 ************************************ 00:19:23.401 15:28:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:23.401 15:28:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:23.401 15:28:26 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.401 15:28:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:23.401 ************************************ 00:19:23.401 START TEST nvmf_control_msg_list 00:19:23.401 ************************************ 00:19:23.401 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:23.401 * Looking for test storage... 00:19:23.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:23.401 15:28:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:23.401 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:23.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.402 --rc genhtml_branch_coverage=1 00:19:23.402 --rc genhtml_function_coverage=1 00:19:23.402 --rc genhtml_legend=1 00:19:23.402 --rc geninfo_all_blocks=1 00:19:23.402 --rc geninfo_unexecuted_blocks=1 00:19:23.402 00:19:23.402 ' 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:23.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.402 --rc genhtml_branch_coverage=1 00:19:23.402 --rc genhtml_function_coverage=1 00:19:23.402 --rc genhtml_legend=1 00:19:23.402 --rc geninfo_all_blocks=1 00:19:23.402 --rc geninfo_unexecuted_blocks=1 00:19:23.402 00:19:23.402 ' 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:23.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.402 --rc genhtml_branch_coverage=1 00:19:23.402 --rc genhtml_function_coverage=1 00:19:23.402 --rc genhtml_legend=1 00:19:23.402 --rc geninfo_all_blocks=1 00:19:23.402 --rc geninfo_unexecuted_blocks=1 00:19:23.402 00:19:23.402 ' 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:19:23.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.402 --rc genhtml_branch_coverage=1 00:19:23.402 --rc genhtml_function_coverage=1 00:19:23.402 --rc genhtml_legend=1 00:19:23.402 --rc geninfo_all_blocks=1 00:19:23.402 --rc geninfo_unexecuted_blocks=1 00:19:23.402 00:19:23.402 ' 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.402 15:28:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:23.402 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:23.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:23.403 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:23.403 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:23.403 15:28:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:23.403 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:23.403 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:23.403 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:23.403 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:23.403 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:23.403 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:23.403 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.403 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:23.403 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.403 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:23.403 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:23.403 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:23.403 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.974 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:29.974 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:29.974 15:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:29.974 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:29.974 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:29.975 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:29.975 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:29.975 15:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:29.975 Found net devices under 0000:86:00.0: cvl_0_0 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:29.975 15:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:29.975 Found net devices under 0000:86:00.1: cvl_0_1 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:29.975 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:29.975 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:29.975 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:29.975 15:28:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:29.975 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:29.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:19:29.975 00:19:29.975 --- 10.0.0.2 ping statistics --- 00:19:29.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.975 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:19:29.975 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:29.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:29.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:19:29.975 00:19:29.975 --- 10.0.0.1 ping statistics --- 00:19:29.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.976 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2196247 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2196247 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2196247 ']' 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.976 [2024-11-20 15:28:33.154007] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:19:29.976 [2024-11-20 15:28:33.154060] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.976 [2024-11-20 15:28:33.236814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.976 [2024-11-20 15:28:33.276868] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.976 [2024-11-20 15:28:33.276906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.976 [2024-11-20 15:28:33.276914] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.976 [2024-11-20 15:28:33.276921] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.976 [2024-11-20 15:28:33.276926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:29.976 [2024-11-20 15:28:33.277448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.976 [2024-11-20 15:28:33.420095] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.976 Malloc0 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.976 [2024-11-20 15:28:33.460319] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2196278 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2196279 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2196280 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2196278 00:19:29.976 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:29.976 [2024-11-20 15:28:33.559004] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:19:29.976 [2024-11-20 15:28:33.559183] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:29.976 [2024-11-20 15:28:33.559341] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:30.912 Initializing NVMe Controllers 00:19:30.912 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:30.912 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:30.912 Initialization complete. Launching workers. 00:19:30.912 ======================================================== 00:19:30.912 Latency(us) 00:19:30.912 Device Information : IOPS MiB/s Average min max 00:19:30.912 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 5651.99 22.08 176.56 145.51 40531.71 00:19:30.913 ======================================================== 00:19:30.913 Total : 5651.99 22.08 176.56 145.51 40531.71 00:19:30.913 00:19:30.913 Initializing NVMe Controllers 00:19:30.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:30.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:30.913 Initialization complete. Launching workers. 
00:19:30.913 ======================================================== 00:19:30.913 Latency(us) 00:19:30.913 Device Information : IOPS MiB/s Average min max 00:19:30.913 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40994.87 40721.83 41913.51 00:19:30.913 ======================================================== 00:19:30.913 Total : 25.00 0.10 40994.87 40721.83 41913.51 00:19:30.913 00:19:30.913 Initializing NVMe Controllers 00:19:30.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:30.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:30.913 Initialization complete. Launching workers. 00:19:30.913 ======================================================== 00:19:30.913 Latency(us) 00:19:30.913 Device Information : IOPS MiB/s Average min max 00:19:30.913 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6130.00 23.95 162.78 127.97 454.73 00:19:30.913 ======================================================== 00:19:30.913 Total : 6130.00 23.95 162.78 127.97 454.73 00:19:30.913 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2196279 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2196280 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:30.913 15:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:30.913 rmmod nvme_tcp 00:19:30.913 rmmod nvme_fabrics 00:19:30.913 rmmod nvme_keyring 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2196247 ']' 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2196247 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2196247 ']' 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2196247 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2196247 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2196247' 00:19:30.913 killing process with pid 2196247 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2196247 00:19:30.913 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2196247 00:19:31.172 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:31.172 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:31.172 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:31.172 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:31.172 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:31.172 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:31.172 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:31.172 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:31.172 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:31.172 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.172 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.172 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:33.712 00:19:33.712 real 0m10.117s 00:19:33.712 user 0m6.451s 
00:19:33.712 sys 0m5.542s 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:33.712 ************************************ 00:19:33.712 END TEST nvmf_control_msg_list 00:19:33.712 ************************************ 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:33.712 ************************************ 00:19:33.712 START TEST nvmf_wait_for_buf 00:19:33.712 ************************************ 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:33.712 * Looking for test storage... 
00:19:33.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:19:33.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.712 --rc genhtml_branch_coverage=1 00:19:33.712 --rc genhtml_function_coverage=1 00:19:33.712 --rc genhtml_legend=1 00:19:33.712 --rc geninfo_all_blocks=1 00:19:33.712 --rc geninfo_unexecuted_blocks=1 00:19:33.712 00:19:33.712 ' 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:33.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.712 --rc genhtml_branch_coverage=1 00:19:33.712 --rc genhtml_function_coverage=1 00:19:33.712 --rc genhtml_legend=1 00:19:33.712 --rc geninfo_all_blocks=1 00:19:33.712 --rc geninfo_unexecuted_blocks=1 00:19:33.712 00:19:33.712 ' 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:33.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.712 --rc genhtml_branch_coverage=1 00:19:33.712 --rc genhtml_function_coverage=1 00:19:33.712 --rc genhtml_legend=1 00:19:33.712 --rc geninfo_all_blocks=1 00:19:33.712 --rc geninfo_unexecuted_blocks=1 00:19:33.712 00:19:33.712 ' 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:33.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.712 --rc genhtml_branch_coverage=1 00:19:33.712 --rc genhtml_function_coverage=1 00:19:33.712 --rc genhtml_legend=1 00:19:33.712 --rc geninfo_all_blocks=1 00:19:33.712 --rc geninfo_unexecuted_blocks=1 00:19:33.712 00:19:33.712 ' 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.712 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:33.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:33.713 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:40.280 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:40.280 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:40.280 Found net devices under 0000:86:00.0: cvl_0_0 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.280 15:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:40.280 Found net devices under 0000:86:00.1: cvl_0_1 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.280 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.281 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.281 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:40.281 15:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:40.281 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:40.281 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:40.281 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:40.281 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:40.281 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.281 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:40.281 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:40.281 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.281 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.281 15:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:40.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:19:40.281 00:19:40.281 --- 10.0.0.2 ping statistics --- 00:19:40.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.281 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:40.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:19:40.281 00:19:40.281 --- 10.0.0.1 ping statistics --- 00:19:40.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.281 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2200028 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2200028 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2200028 ']' 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.281 [2024-11-20 15:28:43.269348] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:19:40.281 [2024-11-20 15:28:43.269393] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.281 [2024-11-20 15:28:43.334515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.281 [2024-11-20 15:28:43.376340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.281 [2024-11-20 15:28:43.376375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:40.281 [2024-11-20 15:28:43.376382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.281 [2024-11-20 15:28:43.376388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.281 [2024-11-20 15:28:43.376393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:40.281 [2024-11-20 15:28:43.376956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.281 
15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.281 Malloc0 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:19:40.281 [2024-11-20 15:28:43.586843] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.281 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.281 [2024-11-20 15:28:43.615047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.282 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:40.282 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:40.282 [2024-11-20 15:28:43.699403] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:41.659 Initializing NVMe Controllers 00:19:41.659 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:41.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:41.659 Initialization complete. Launching workers. 00:19:41.659 ======================================================== 00:19:41.659 Latency(us) 00:19:41.659 Device Information : IOPS MiB/s Average min max 00:19:41.659 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33635.41 30919.96 71077.84 00:19:41.659 ======================================================== 00:19:41.659 Total : 124.00 15.50 33635.41 30919.96 71077.84 00:19:41.659 00:19:41.659 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:41.659 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:41.659 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.659 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:41.659 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.659 15:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:19:41.659 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:19:41.659 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:41.659 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:41.659 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:41.659 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:41.659 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:41.659 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:41.659 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:41.659 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:41.659 rmmod nvme_tcp 00:19:41.659 rmmod nvme_fabrics 00:19:41.659 rmmod nvme_keyring 00:19:41.659 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:41.659 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:41.659 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:41.659 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2200028 ']' 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2200028 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2200028 ']' 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2200028 
00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2200028 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2200028' 00:19:41.660 killing process with pid 2200028 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2200028 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2200028 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:41.660 15:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.660 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.196 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:44.196 00:19:44.196 real 0m10.476s 00:19:44.196 user 0m4.052s 00:19:44.196 sys 0m4.907s 00:19:44.196 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.196 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:44.196 ************************************ 00:19:44.196 END TEST nvmf_wait_for_buf 00:19:44.196 ************************************ 00:19:44.196 15:28:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:44.196 15:28:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:44.196 15:28:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:44.196 15:28:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:44.196 15:28:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:44.196 15:28:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:49.470 
15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:49.470 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:49.471 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:49.471 15:28:53 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:49.471 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:49.471 Found net devices under 0000:86:00.0: cvl_0_0 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:49.471 Found net devices under 0000:86:00.1: cvl_0_1 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:49.471 ************************************ 00:19:49.471 START TEST nvmf_perf_adq 00:19:49.471 ************************************ 00:19:49.471 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:49.731 * Looking for test storage... 00:19:49.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:49.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.731 --rc genhtml_branch_coverage=1 00:19:49.731 --rc genhtml_function_coverage=1 00:19:49.731 --rc genhtml_legend=1 00:19:49.731 --rc geninfo_all_blocks=1 00:19:49.731 --rc geninfo_unexecuted_blocks=1 00:19:49.731 00:19:49.731 ' 00:19:49.731 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:49.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.731 --rc genhtml_branch_coverage=1 00:19:49.731 --rc genhtml_function_coverage=1 00:19:49.731 --rc genhtml_legend=1 00:19:49.731 --rc geninfo_all_blocks=1 00:19:49.731 --rc geninfo_unexecuted_blocks=1 00:19:49.731 00:19:49.732 ' 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:49.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.732 --rc genhtml_branch_coverage=1 00:19:49.732 --rc genhtml_function_coverage=1 00:19:49.732 --rc genhtml_legend=1 00:19:49.732 --rc geninfo_all_blocks=1 00:19:49.732 --rc geninfo_unexecuted_blocks=1 00:19:49.732 00:19:49.732 ' 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:49.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.732 --rc genhtml_branch_coverage=1 00:19:49.732 --rc genhtml_function_coverage=1 00:19:49.732 --rc genhtml_legend=1 00:19:49.732 --rc geninfo_all_blocks=1 00:19:49.732 --rc geninfo_unexecuted_blocks=1 00:19:49.732 00:19:49.732 ' 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.732 15:28:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:49.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:49.732 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:56.303 15:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:56.303 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:56.303 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:56.303 Found net devices under 0000:86:00.0: cvl_0_0 00:19:56.303 15:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:56.303 Found net devices under 0000:86:00.1: cvl_0_1 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:19:56.303 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:56.561 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:59.095 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:04.368 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:04.369 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:04.369 15:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:04.369 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:04.369 Found net devices under 0000:86:00.0: cvl_0_0 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:04.369 Found net devices under 0000:86:00.1: cvl_0_1 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:04.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:04.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:20:04.369 00:20:04.369 --- 10.0.0.2 ping statistics --- 00:20:04.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.369 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:04.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:04.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:20:04.369 00:20:04.369 --- 10.0.0.1 ping statistics --- 00:20:04.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.369 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:04.369 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:04.370 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:04.370 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:20:04.370 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:04.370 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.370 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2208362 00:20:04.370 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2208362 00:20:04.370 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:04.370 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2208362 ']' 00:20:04.370 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.370 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.370 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.370 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.370 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.370 [2024-11-20 15:29:07.751630] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:20:04.370 [2024-11-20 15:29:07.751678] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.370 [2024-11-20 15:29:07.831117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:04.370 [2024-11-20 15:29:07.875834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.370 [2024-11-20 15:29:07.875871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.370 [2024-11-20 15:29:07.875879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.370 [2024-11-20 15:29:07.875884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.370 [2024-11-20 15:29:07.875890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:04.370 [2024-11-20 15:29:07.877416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.370 [2024-11-20 15:29:07.877528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.370 [2024-11-20 15:29:07.877636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.370 [2024-11-20 15:29:07.877637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:04.937 15:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.937 [2024-11-20 15:29:08.757258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.937 Malloc1 00:20:04.937 15:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.937 [2024-11-20 15:29:08.813434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2208612 00:20:04.937 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:04.937 15:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:07.465 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:07.465 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.465 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:07.465 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.465 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:07.465 "tick_rate": 2300000000, 00:20:07.465 "poll_groups": [ 00:20:07.465 { 00:20:07.465 "name": "nvmf_tgt_poll_group_000", 00:20:07.465 "admin_qpairs": 1, 00:20:07.465 "io_qpairs": 1, 00:20:07.465 "current_admin_qpairs": 1, 00:20:07.465 "current_io_qpairs": 1, 00:20:07.465 "pending_bdev_io": 0, 00:20:07.465 "completed_nvme_io": 18334, 00:20:07.465 "transports": [ 00:20:07.465 { 00:20:07.465 "trtype": "TCP" 00:20:07.465 } 00:20:07.465 ] 00:20:07.465 }, 00:20:07.465 { 00:20:07.465 "name": "nvmf_tgt_poll_group_001", 00:20:07.465 "admin_qpairs": 0, 00:20:07.465 "io_qpairs": 1, 00:20:07.465 "current_admin_qpairs": 0, 00:20:07.465 "current_io_qpairs": 1, 00:20:07.465 "pending_bdev_io": 0, 00:20:07.465 "completed_nvme_io": 18705, 00:20:07.465 "transports": [ 00:20:07.465 { 00:20:07.465 "trtype": "TCP" 00:20:07.465 } 00:20:07.465 ] 00:20:07.465 }, 00:20:07.465 { 00:20:07.465 "name": "nvmf_tgt_poll_group_002", 00:20:07.465 "admin_qpairs": 0, 00:20:07.465 "io_qpairs": 1, 00:20:07.465 "current_admin_qpairs": 0, 00:20:07.465 "current_io_qpairs": 1, 00:20:07.465 "pending_bdev_io": 0, 00:20:07.465 "completed_nvme_io": 18385, 00:20:07.465 
"transports": [ 00:20:07.465 { 00:20:07.465 "trtype": "TCP" 00:20:07.465 } 00:20:07.465 ] 00:20:07.465 }, 00:20:07.465 { 00:20:07.465 "name": "nvmf_tgt_poll_group_003", 00:20:07.465 "admin_qpairs": 0, 00:20:07.465 "io_qpairs": 1, 00:20:07.465 "current_admin_qpairs": 0, 00:20:07.465 "current_io_qpairs": 1, 00:20:07.465 "pending_bdev_io": 0, 00:20:07.465 "completed_nvme_io": 18423, 00:20:07.465 "transports": [ 00:20:07.465 { 00:20:07.465 "trtype": "TCP" 00:20:07.465 } 00:20:07.465 ] 00:20:07.465 } 00:20:07.465 ] 00:20:07.465 }' 00:20:07.465 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:07.465 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:07.465 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:07.465 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:07.465 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2208612 00:20:15.592 Initializing NVMe Controllers 00:20:15.592 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:15.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:15.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:15.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:15.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:15.592 Initialization complete. Launching workers. 
00:20:15.592 ======================================================== 00:20:15.592 Latency(us) 00:20:15.592 Device Information : IOPS MiB/s Average min max 00:20:15.592 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10499.80 41.01 6097.17 1598.42 10726.87 00:20:15.592 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10781.70 42.12 5937.59 2143.13 10404.40 00:20:15.592 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10665.70 41.66 6000.76 2364.85 9973.27 00:20:15.592 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10557.20 41.24 6063.20 1765.07 10256.69 00:20:15.592 ======================================================== 00:20:15.593 Total : 42504.40 166.03 6024.06 1598.42 10726.87 00:20:15.593 00:20:15.593 [2024-11-20 15:29:19.111913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235d520 is same with the state(6) to be set 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:15.593 rmmod nvme_tcp 00:20:15.593 rmmod nvme_fabrics 00:20:15.593 rmmod nvme_keyring 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- 
# set -e 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2208362 ']' 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2208362 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2208362 ']' 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2208362 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2208362 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2208362' 00:20:15.593 killing process with pid 2208362 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2208362 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2208362 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 
-- # iptr 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.593 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.132 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:18.132 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:18.132 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:18.132 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:19.071 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:20.981 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.388 15:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # 
net_devs=() 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.388 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:26.389 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:26.389 
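The enumeration above builds per-family arrays (e810, x722, mlx) keyed by vendor:device IDs from `pci_bus_cache`, then matches each discovered PCI function against them. A minimal dry-run sketch of that classification logic, with IDs taken from the trace; the helper name `classify_nic` is illustrative and not part of nvmf/common.sh:

```shell
#!/usr/bin/env bash
# Classify a NIC by PCI vendor:device ID, mirroring the e810/x722/mlx
# tables walked in the trace above. classify_nic is an illustrative
# helper, not a function from nvmf/common.sh; the mlx branch lists a
# subset of the Mellanox IDs seen in the trace.
classify_nic() {
    local vendor=$1 device=$2
    local intel=0x8086 mellanox=0x15b3
    case "$vendor:$device" in
        "$intel:0x1592"|"$intel:0x159b") echo e810 ;;
        "$intel:0x37d2")                 echo x722 ;;
        "$mellanox:0x1017"|"$mellanox:0x1019"|"$mellanox:0x101b"|"$mellanox:0x101d") echo mlx ;;
        *)                               echo unknown ;;
    esac
}

# The trace found two functions of 0x8086:0x159b (an E810 NIC, ice driver):
classify_nic 0x8086 0x159b   # -> e810
classify_nic 0x15b3 0x1017   # -> mlx
```

The e810 match is why the harness takes the `[[ e810 == e810 ]]` branch and restricts `pci_devs` to the two E810 functions (0000:86:00.0 and 0000:86:00.1) seen a few entries later.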
15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:26.389 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.0: cvl_0_0' 00:20:26.389 Found net devices under 0000:86:00.0: cvl_0_0 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:26.389 Found net devices under 0000:86:00.1: cvl_0_1 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:26.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:20:26.389 00:20:26.389 --- 10.0.0.2 ping statistics --- 00:20:26.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.389 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:26.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:20:26.389 00:20:26.389 --- 10.0.0.1 ping statistics --- 00:20:26.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.389 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:26.389 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:26.390 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:26.390 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:26.390 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:26.390 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:26.390 net.core.busy_poll = 1 00:20:26.390 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:26.390 net.core.busy_read = 1 00:20:26.390 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:26.390 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:26.390 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:26.390 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:26.390 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:26.390 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:26.390 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:26.390 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.390 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.390 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2212398 00:20:26.390 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2212398 00:20:26.390 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:20:26.390 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2212398 ']' 00:20:26.390 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.390 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.390 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.390 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.390 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.390 [2024-11-20 15:29:30.202718] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:20:26.390 [2024-11-20 15:29:30.202770] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.390 [2024-11-20 15:29:30.284371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.649 [2024-11-20 15:29:30.326622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.649 [2024-11-20 15:29:30.326659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.649 [2024-11-20 15:29:30.326667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.649 [2024-11-20 15:29:30.326674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:26.649 [2024-11-20 15:29:30.326679] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.649 [2024-11-20 15:29:30.328219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.649 [2024-11-20 15:29:30.328330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.649 [2024-11-20 15:29:30.328412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.649 [2024-11-20 15:29:30.328413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.649 [2024-11-20 15:29:30.537042] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:26.649 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.649 15:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.907 Malloc1 00:20:26.907 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.907 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:26.907 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.907 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.907 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.907 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:26.907 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.907 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.907 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.907 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:26.907 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.907 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.907 [2024-11-20 15:29:30.610733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.907 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.907 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2212429 
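Steps @42 through @49 above configure the target for ADQ over JSON-RPC: read the default sock implementation (posix), enable placement IDs and zero-copy send on it, start the framework, create the TCP transport with `--sock-priority 1`, and expose a Malloc bdev through subsystem cnode1 on 10.0.0.2:4420. A dry-run sketch of that sequence follows; commands are printed rather than executed, and `rpc` is a stand-in for SPDK's scripts/rpc.py talking to the nvmf_tgt started earlier:

```shell
#!/usr/bin/env bash
# Dry-run of the ADQ target setup from perf_adq.sh@42-49: each rpc.py
# call is printed instead of executed. "rpc" is a stand-in for SPDK's
# scripts/rpc.py; arguments are copied from the trace above.
rpc() { echo "rpc.py $*"; }

impl=posix   # sock_get_default_impl reported "posix" in the trace
rpc sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i "$impl"
rpc framework_start_init
rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
rpc bdev_malloc_create 64 512 -b Malloc1
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The `--sock-priority 1` value lines up with the tc flower filter installed earlier (`hw_tc 1`), which is what steers 4420 traffic into the dedicated ADQ traffic class before spdk_nvme_perf is launched.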
00:20:26.907 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:26.907 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:28.812 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:28.812 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.812 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.812 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.812 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:28.812 "tick_rate": 2300000000, 00:20:28.812 "poll_groups": [ 00:20:28.812 { 00:20:28.812 "name": "nvmf_tgt_poll_group_000", 00:20:28.812 "admin_qpairs": 1, 00:20:28.812 "io_qpairs": 2, 00:20:28.812 "current_admin_qpairs": 1, 00:20:28.812 "current_io_qpairs": 2, 00:20:28.812 "pending_bdev_io": 0, 00:20:28.812 "completed_nvme_io": 29899, 00:20:28.812 "transports": [ 00:20:28.812 { 00:20:28.812 "trtype": "TCP" 00:20:28.812 } 00:20:28.812 ] 00:20:28.812 }, 00:20:28.812 { 00:20:28.812 "name": "nvmf_tgt_poll_group_001", 00:20:28.812 "admin_qpairs": 0, 00:20:28.812 "io_qpairs": 2, 00:20:28.812 "current_admin_qpairs": 0, 00:20:28.812 "current_io_qpairs": 2, 00:20:28.812 "pending_bdev_io": 0, 00:20:28.812 "completed_nvme_io": 26229, 00:20:28.812 "transports": [ 00:20:28.812 { 00:20:28.812 "trtype": "TCP" 00:20:28.812 } 00:20:28.812 ] 00:20:28.812 }, 00:20:28.812 { 00:20:28.812 "name": "nvmf_tgt_poll_group_002", 00:20:28.812 "admin_qpairs": 0, 00:20:28.812 "io_qpairs": 0, 00:20:28.812 "current_admin_qpairs": 0, 
00:20:28.812 "current_io_qpairs": 0, 00:20:28.812 "pending_bdev_io": 0, 00:20:28.812 "completed_nvme_io": 0, 00:20:28.812 "transports": [ 00:20:28.812 { 00:20:28.812 "trtype": "TCP" 00:20:28.812 } 00:20:28.812 ] 00:20:28.812 }, 00:20:28.812 { 00:20:28.812 "name": "nvmf_tgt_poll_group_003", 00:20:28.812 "admin_qpairs": 0, 00:20:28.812 "io_qpairs": 0, 00:20:28.812 "current_admin_qpairs": 0, 00:20:28.812 "current_io_qpairs": 0, 00:20:28.812 "pending_bdev_io": 0, 00:20:28.812 "completed_nvme_io": 0, 00:20:28.813 "transports": [ 00:20:28.813 { 00:20:28.813 "trtype": "TCP" 00:20:28.813 } 00:20:28.813 ] 00:20:28.813 } 00:20:28.813 ] 00:20:28.813 }' 00:20:28.813 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:28.813 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:28.813 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:28.813 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:28.813 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2212429 00:20:36.936 Initializing NVMe Controllers 00:20:36.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:36.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:36.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:36.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:36.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:36.936 Initialization complete. Launching workers. 
00:20:36.936 ======================================================== 00:20:36.936 Latency(us) 00:20:36.936 Device Information : IOPS MiB/s Average min max 00:20:36.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8121.30 31.72 7883.81 1391.34 54161.87 00:20:36.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7331.40 28.64 8729.87 1605.92 53768.70 00:20:36.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7445.80 29.09 8596.07 1184.22 52427.20 00:20:36.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6645.60 25.96 9631.57 1226.36 52634.73 00:20:36.936 ======================================================== 00:20:36.936 Total : 29544.10 115.41 8666.40 1184.22 54161.87 00:20:36.936 00:20:36.936 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:36.936 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:36.936 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:36.936 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:36.936 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:36.936 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:36.936 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:36.936 rmmod nvme_tcp 00:20:36.936 rmmod nvme_fabrics 00:20:36.936 rmmod nvme_keyring 00:20:36.936 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:36.936 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:36.936 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:36.936 15:29:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2212398 ']' 00:20:36.936 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2212398 00:20:36.936 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2212398 ']' 00:20:36.936 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2212398 00:20:36.936 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:36.936 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:36.936 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2212398 00:20:37.196 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:37.196 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:37.196 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2212398' 00:20:37.196 killing process with pid 2212398 00:20:37.196 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2212398 00:20:37.196 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2212398 00:20:37.196 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:37.196 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:37.196 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:37.196 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:37.196 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:37.196 
15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:37.196 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:37.196 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:37.196 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:37.196 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.196 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.196 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:40.489 00:20:40.489 real 0m50.814s 00:20:40.489 user 2m46.923s 00:20:40.489 sys 0m10.521s 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.489 ************************************ 00:20:40.489 END TEST nvmf_perf_adq 00:20:40.489 ************************************ 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:40.489 ************************************ 00:20:40.489 START TEST nvmf_shutdown 00:20:40.489 ************************************ 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:40.489 * Looking for test storage... 00:20:40.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:40.489 15:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:40.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.489 --rc genhtml_branch_coverage=1 00:20:40.489 --rc genhtml_function_coverage=1 00:20:40.489 --rc genhtml_legend=1 00:20:40.489 --rc geninfo_all_blocks=1 00:20:40.489 --rc geninfo_unexecuted_blocks=1 00:20:40.489 00:20:40.489 ' 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:40.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.489 --rc genhtml_branch_coverage=1 00:20:40.489 --rc genhtml_function_coverage=1 00:20:40.489 --rc genhtml_legend=1 00:20:40.489 --rc geninfo_all_blocks=1 00:20:40.489 --rc geninfo_unexecuted_blocks=1 00:20:40.489 00:20:40.489 ' 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:40.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.489 --rc genhtml_branch_coverage=1 00:20:40.489 --rc genhtml_function_coverage=1 00:20:40.489 --rc genhtml_legend=1 00:20:40.489 --rc geninfo_all_blocks=1 00:20:40.489 --rc geninfo_unexecuted_blocks=1 00:20:40.489 00:20:40.489 ' 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:40.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.489 --rc genhtml_branch_coverage=1 00:20:40.489 --rc genhtml_function_coverage=1 00:20:40.489 --rc genhtml_legend=1 00:20:40.489 --rc geninfo_all_blocks=1 00:20:40.489 --rc geninfo_unexecuted_blocks=1 00:20:40.489 00:20:40.489 ' 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:40.489 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:40.490 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.490 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.490 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:40.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:40.749 ************************************ 00:20:40.749 START TEST nvmf_shutdown_tc1 00:20:40.749 ************************************ 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:40.749 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:47.316 15:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.316 15:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:47.316 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.316 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.317 15:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:47.317 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:47.317 Found net devices under 0000:86:00.0: cvl_0_0 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:47.317 Found net devices under 0000:86:00.1: cvl_0_1 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:47.317 15:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:47.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:47.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:20:47.317 00:20:47.317 --- 10.0.0.2 ping statistics --- 00:20:47.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.317 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:47.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:20:47.317 00:20:47.317 --- 10.0.0.1 ping statistics --- 00:20:47.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.317 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2217879 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2217879 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2217879 ']' 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:47.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.317 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:47.317 [2024-11-20 15:29:50.516133] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:20:47.318 [2024-11-20 15:29:50.516178] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.318 [2024-11-20 15:29:50.595316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:47.318 [2024-11-20 15:29:50.636778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.318 [2024-11-20 15:29:50.636816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.318 [2024-11-20 15:29:50.636824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.318 [2024-11-20 15:29:50.636832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.318 [2024-11-20 15:29:50.636840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:47.318 [2024-11-20 15:29:50.638535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.318 [2024-11-20 15:29:50.638644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:47.318 [2024-11-20 15:29:50.638749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.318 [2024-11-20 15:29:50.638750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:47.318 [2024-11-20 15:29:50.775731] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.318 15:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.318 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:47.318 Malloc1 00:20:47.318 [2024-11-20 15:29:50.887706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.318 Malloc2 00:20:47.318 Malloc3 00:20:47.318 Malloc4 00:20:47.318 Malloc5 00:20:47.318 Malloc6 00:20:47.318 Malloc7 00:20:47.318 Malloc8 00:20:47.578 Malloc9 
00:20:47.578 Malloc10 00:20:47.578 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.578 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:47.578 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:47.578 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:47.578 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2218151 00:20:47.578 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2218151 /var/tmp/bdevperf.sock 00:20:47.578 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2218151 ']' 00:20:47.578 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.578 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:47.578 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.578 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:47.578 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:47.578 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:47.578 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.578 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:47.578 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:47.578 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.578 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.578 { 00:20:47.578 "params": { 00:20:47.578 "name": "Nvme$subsystem", 00:20:47.578 "trtype": "$TEST_TRANSPORT", 00:20:47.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.578 "adrfam": "ipv4", 00:20:47.578 "trsvcid": "$NVMF_PORT", 00:20:47.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.578 "hdgst": ${hdgst:-false}, 00:20:47.579 "ddgst": ${ddgst:-false} 00:20:47.579 }, 00:20:47.579 "method": "bdev_nvme_attach_controller" 00:20:47.579 } 00:20:47.579 EOF 00:20:47.579 )") 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.579 { 00:20:47.579 "params": { 00:20:47.579 "name": "Nvme$subsystem", 00:20:47.579 "trtype": "$TEST_TRANSPORT", 00:20:47.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.579 "adrfam": "ipv4", 00:20:47.579 "trsvcid": "$NVMF_PORT", 00:20:47.579 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.579 "hdgst": ${hdgst:-false}, 00:20:47.579 "ddgst": ${ddgst:-false} 00:20:47.579 }, 00:20:47.579 "method": "bdev_nvme_attach_controller" 00:20:47.579 } 00:20:47.579 EOF 00:20:47.579 )") 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.579 { 00:20:47.579 "params": { 00:20:47.579 "name": "Nvme$subsystem", 00:20:47.579 "trtype": "$TEST_TRANSPORT", 00:20:47.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.579 "adrfam": "ipv4", 00:20:47.579 "trsvcid": "$NVMF_PORT", 00:20:47.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.579 "hdgst": ${hdgst:-false}, 00:20:47.579 "ddgst": ${ddgst:-false} 00:20:47.579 }, 00:20:47.579 "method": "bdev_nvme_attach_controller" 00:20:47.579 } 00:20:47.579 EOF 00:20:47.579 )") 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.579 { 00:20:47.579 "params": { 00:20:47.579 "name": "Nvme$subsystem", 00:20:47.579 "trtype": "$TEST_TRANSPORT", 00:20:47.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.579 "adrfam": "ipv4", 00:20:47.579 "trsvcid": "$NVMF_PORT", 00:20:47.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.579 "hdgst": 
${hdgst:-false}, 00:20:47.579 "ddgst": ${ddgst:-false} 00:20:47.579 }, 00:20:47.579 "method": "bdev_nvme_attach_controller" 00:20:47.579 } 00:20:47.579 EOF 00:20:47.579 )") 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.579 { 00:20:47.579 "params": { 00:20:47.579 "name": "Nvme$subsystem", 00:20:47.579 "trtype": "$TEST_TRANSPORT", 00:20:47.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.579 "adrfam": "ipv4", 00:20:47.579 "trsvcid": "$NVMF_PORT", 00:20:47.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.579 "hdgst": ${hdgst:-false}, 00:20:47.579 "ddgst": ${ddgst:-false} 00:20:47.579 }, 00:20:47.579 "method": "bdev_nvme_attach_controller" 00:20:47.579 } 00:20:47.579 EOF 00:20:47.579 )") 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.579 { 00:20:47.579 "params": { 00:20:47.579 "name": "Nvme$subsystem", 00:20:47.579 "trtype": "$TEST_TRANSPORT", 00:20:47.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.579 "adrfam": "ipv4", 00:20:47.579 "trsvcid": "$NVMF_PORT", 00:20:47.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.579 "hdgst": ${hdgst:-false}, 00:20:47.579 "ddgst": ${ddgst:-false} 00:20:47.579 }, 00:20:47.579 "method": "bdev_nvme_attach_controller" 
00:20:47.579 } 00:20:47.579 EOF 00:20:47.579 )") 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.579 [2024-11-20 15:29:51.369054] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:20:47.579 [2024-11-20 15:29:51.369102] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.579 { 00:20:47.579 "params": { 00:20:47.579 "name": "Nvme$subsystem", 00:20:47.579 "trtype": "$TEST_TRANSPORT", 00:20:47.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.579 "adrfam": "ipv4", 00:20:47.579 "trsvcid": "$NVMF_PORT", 00:20:47.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.579 "hdgst": ${hdgst:-false}, 00:20:47.579 "ddgst": ${ddgst:-false} 00:20:47.579 }, 00:20:47.579 "method": "bdev_nvme_attach_controller" 00:20:47.579 } 00:20:47.579 EOF 00:20:47.579 )") 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.579 { 00:20:47.579 "params": { 00:20:47.579 "name": "Nvme$subsystem", 00:20:47.579 "trtype": "$TEST_TRANSPORT", 00:20:47.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.579 "adrfam": "ipv4", 00:20:47.579 "trsvcid": "$NVMF_PORT", 
00:20:47.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.579 "hdgst": ${hdgst:-false}, 00:20:47.579 "ddgst": ${ddgst:-false} 00:20:47.579 }, 00:20:47.579 "method": "bdev_nvme_attach_controller" 00:20:47.579 } 00:20:47.579 EOF 00:20:47.579 )") 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.579 { 00:20:47.579 "params": { 00:20:47.579 "name": "Nvme$subsystem", 00:20:47.579 "trtype": "$TEST_TRANSPORT", 00:20:47.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.579 "adrfam": "ipv4", 00:20:47.579 "trsvcid": "$NVMF_PORT", 00:20:47.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.579 "hdgst": ${hdgst:-false}, 00:20:47.579 "ddgst": ${ddgst:-false} 00:20:47.579 }, 00:20:47.579 "method": "bdev_nvme_attach_controller" 00:20:47.579 } 00:20:47.579 EOF 00:20:47.579 )") 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:47.579 { 00:20:47.579 "params": { 00:20:47.579 "name": "Nvme$subsystem", 00:20:47.579 "trtype": "$TEST_TRANSPORT", 00:20:47.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.579 "adrfam": "ipv4", 00:20:47.579 "trsvcid": "$NVMF_PORT", 00:20:47.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:20:47.579 "hdgst": ${hdgst:-false}, 00:20:47.579 "ddgst": ${ddgst:-false} 00:20:47.579 }, 00:20:47.579 "method": "bdev_nvme_attach_controller" 00:20:47.579 } 00:20:47.579 EOF 00:20:47.579 )") 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:47.579 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:47.579 "params": { 00:20:47.579 "name": "Nvme1", 00:20:47.579 "trtype": "tcp", 00:20:47.579 "traddr": "10.0.0.2", 00:20:47.579 "adrfam": "ipv4", 00:20:47.579 "trsvcid": "4420", 00:20:47.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.579 "hdgst": false, 00:20:47.579 "ddgst": false 00:20:47.579 }, 00:20:47.579 "method": "bdev_nvme_attach_controller" 00:20:47.579 },{ 00:20:47.579 "params": { 00:20:47.579 "name": "Nvme2", 00:20:47.579 "trtype": "tcp", 00:20:47.579 "traddr": "10.0.0.2", 00:20:47.579 "adrfam": "ipv4", 00:20:47.579 "trsvcid": "4420", 00:20:47.579 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:47.579 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:47.579 "hdgst": false, 00:20:47.580 "ddgst": false 00:20:47.580 }, 00:20:47.580 "method": "bdev_nvme_attach_controller" 00:20:47.580 },{ 00:20:47.580 "params": { 00:20:47.580 "name": "Nvme3", 00:20:47.580 "trtype": "tcp", 00:20:47.580 "traddr": "10.0.0.2", 00:20:47.580 "adrfam": "ipv4", 00:20:47.580 "trsvcid": "4420", 00:20:47.580 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:47.580 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:47.580 "hdgst": false, 00:20:47.580 "ddgst": false 00:20:47.580 }, 00:20:47.580 "method": "bdev_nvme_attach_controller" 00:20:47.580 },{ 00:20:47.580 "params": { 00:20:47.580 
"name": "Nvme4", 00:20:47.580 "trtype": "tcp", 00:20:47.580 "traddr": "10.0.0.2", 00:20:47.580 "adrfam": "ipv4", 00:20:47.580 "trsvcid": "4420", 00:20:47.580 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:47.580 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:47.580 "hdgst": false, 00:20:47.580 "ddgst": false 00:20:47.580 }, 00:20:47.580 "method": "bdev_nvme_attach_controller" 00:20:47.580 },{ 00:20:47.580 "params": { 00:20:47.580 "name": "Nvme5", 00:20:47.580 "trtype": "tcp", 00:20:47.580 "traddr": "10.0.0.2", 00:20:47.580 "adrfam": "ipv4", 00:20:47.580 "trsvcid": "4420", 00:20:47.580 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:47.580 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:47.580 "hdgst": false, 00:20:47.580 "ddgst": false 00:20:47.580 }, 00:20:47.580 "method": "bdev_nvme_attach_controller" 00:20:47.580 },{ 00:20:47.580 "params": { 00:20:47.580 "name": "Nvme6", 00:20:47.580 "trtype": "tcp", 00:20:47.580 "traddr": "10.0.0.2", 00:20:47.580 "adrfam": "ipv4", 00:20:47.580 "trsvcid": "4420", 00:20:47.580 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:47.580 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:47.580 "hdgst": false, 00:20:47.580 "ddgst": false 00:20:47.580 }, 00:20:47.580 "method": "bdev_nvme_attach_controller" 00:20:47.580 },{ 00:20:47.580 "params": { 00:20:47.580 "name": "Nvme7", 00:20:47.580 "trtype": "tcp", 00:20:47.580 "traddr": "10.0.0.2", 00:20:47.580 "adrfam": "ipv4", 00:20:47.580 "trsvcid": "4420", 00:20:47.580 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:47.580 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:47.580 "hdgst": false, 00:20:47.580 "ddgst": false 00:20:47.580 }, 00:20:47.580 "method": "bdev_nvme_attach_controller" 00:20:47.580 },{ 00:20:47.580 "params": { 00:20:47.580 "name": "Nvme8", 00:20:47.580 "trtype": "tcp", 00:20:47.580 "traddr": "10.0.0.2", 00:20:47.580 "adrfam": "ipv4", 00:20:47.580 "trsvcid": "4420", 00:20:47.580 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:47.580 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:47.580 
"hdgst": false, 00:20:47.580 "ddgst": false 00:20:47.580 }, 00:20:47.580 "method": "bdev_nvme_attach_controller" 00:20:47.580 },{ 00:20:47.580 "params": { 00:20:47.580 "name": "Nvme9", 00:20:47.580 "trtype": "tcp", 00:20:47.580 "traddr": "10.0.0.2", 00:20:47.580 "adrfam": "ipv4", 00:20:47.580 "trsvcid": "4420", 00:20:47.580 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:47.580 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:47.580 "hdgst": false, 00:20:47.580 "ddgst": false 00:20:47.580 }, 00:20:47.580 "method": "bdev_nvme_attach_controller" 00:20:47.580 },{ 00:20:47.580 "params": { 00:20:47.580 "name": "Nvme10", 00:20:47.580 "trtype": "tcp", 00:20:47.580 "traddr": "10.0.0.2", 00:20:47.580 "adrfam": "ipv4", 00:20:47.580 "trsvcid": "4420", 00:20:47.580 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:47.580 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:47.580 "hdgst": false, 00:20:47.580 "ddgst": false 00:20:47.580 }, 00:20:47.580 "method": "bdev_nvme_attach_controller" 00:20:47.580 }' 00:20:47.580 [2024-11-20 15:29:51.444514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.839 [2024-11-20 15:29:51.486454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.745 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.745 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:49.745 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:49.745 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.745 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:49.745 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.745 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2218151 00:20:49.746 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:49.746 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:50.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2218151 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:50.683 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2217879 00:20:50.683 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:50.683 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:50.683 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:50.683 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:50.683 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:50.683 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:50.684 { 00:20:50.684 "params": { 00:20:50.684 "name": "Nvme$subsystem", 00:20:50.684 "trtype": "$TEST_TRANSPORT", 00:20:50.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:50.684 "adrfam": "ipv4", 00:20:50.684 "trsvcid": "$NVMF_PORT", 00:20:50.684 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:50.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:50.684 "hdgst": ${hdgst:-false}, 00:20:50.684 "ddgst": ${ddgst:-false} 00:20:50.684 }, 00:20:50.684 "method": "bdev_nvme_attach_controller" 00:20:50.684 } 00:20:50.684 EOF 00:20:50.684 )") 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:50.684 { 00:20:50.684 "params": { 00:20:50.684 "name": "Nvme$subsystem", 00:20:50.684 "trtype": "$TEST_TRANSPORT", 00:20:50.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:50.684 "adrfam": "ipv4", 00:20:50.684 "trsvcid": "$NVMF_PORT", 00:20:50.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:50.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:50.684 "hdgst": ${hdgst:-false}, 00:20:50.684 "ddgst": ${ddgst:-false} 00:20:50.684 }, 00:20:50.684 "method": "bdev_nvme_attach_controller" 00:20:50.684 } 00:20:50.684 EOF 00:20:50.684 )") 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:50.684 { 00:20:50.684 "params": { 00:20:50.684 "name": "Nvme$subsystem", 00:20:50.684 "trtype": "$TEST_TRANSPORT", 00:20:50.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:50.684 "adrfam": "ipv4", 00:20:50.684 "trsvcid": "$NVMF_PORT", 00:20:50.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:50.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:50.684 "hdgst": 
${hdgst:-false}, 00:20:50.684 "ddgst": ${ddgst:-false} 00:20:50.684 }, 00:20:50.684 "method": "bdev_nvme_attach_controller" 00:20:50.684 } 00:20:50.684 EOF 00:20:50.684 )") 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:50.684 { 00:20:50.684 "params": { 00:20:50.684 "name": "Nvme$subsystem", 00:20:50.684 "trtype": "$TEST_TRANSPORT", 00:20:50.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:50.684 "adrfam": "ipv4", 00:20:50.684 "trsvcid": "$NVMF_PORT", 00:20:50.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:50.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:50.684 "hdgst": ${hdgst:-false}, 00:20:50.684 "ddgst": ${ddgst:-false} 00:20:50.684 }, 00:20:50.684 "method": "bdev_nvme_attach_controller" 00:20:50.684 } 00:20:50.684 EOF 00:20:50.684 )") 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:50.684 { 00:20:50.684 "params": { 00:20:50.684 "name": "Nvme$subsystem", 00:20:50.684 "trtype": "$TEST_TRANSPORT", 00:20:50.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:50.684 "adrfam": "ipv4", 00:20:50.684 "trsvcid": "$NVMF_PORT", 00:20:50.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:50.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:50.684 "hdgst": ${hdgst:-false}, 00:20:50.684 "ddgst": ${ddgst:-false} 00:20:50.684 }, 00:20:50.684 "method": "bdev_nvme_attach_controller" 
00:20:50.684 } 00:20:50.684 EOF 00:20:50.684 )") 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:50.684 { 00:20:50.684 "params": { 00:20:50.684 "name": "Nvme$subsystem", 00:20:50.684 "trtype": "$TEST_TRANSPORT", 00:20:50.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:50.684 "adrfam": "ipv4", 00:20:50.684 "trsvcid": "$NVMF_PORT", 00:20:50.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:50.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:50.684 "hdgst": ${hdgst:-false}, 00:20:50.684 "ddgst": ${ddgst:-false} 00:20:50.684 }, 00:20:50.684 "method": "bdev_nvme_attach_controller" 00:20:50.684 } 00:20:50.684 EOF 00:20:50.684 )") 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:50.684 { 00:20:50.684 "params": { 00:20:50.684 "name": "Nvme$subsystem", 00:20:50.684 "trtype": "$TEST_TRANSPORT", 00:20:50.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:50.684 "adrfam": "ipv4", 00:20:50.684 "trsvcid": "$NVMF_PORT", 00:20:50.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:50.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:50.684 "hdgst": ${hdgst:-false}, 00:20:50.684 "ddgst": ${ddgst:-false} 00:20:50.684 }, 00:20:50.684 "method": "bdev_nvme_attach_controller" 00:20:50.684 } 00:20:50.684 EOF 00:20:50.684 )") 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:50.684 { 00:20:50.684 "params": { 00:20:50.684 "name": "Nvme$subsystem", 00:20:50.684 "trtype": "$TEST_TRANSPORT", 00:20:50.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:50.684 "adrfam": "ipv4", 00:20:50.684 "trsvcid": "$NVMF_PORT", 00:20:50.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:50.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:50.684 "hdgst": ${hdgst:-false}, 00:20:50.684 "ddgst": ${ddgst:-false} 00:20:50.684 }, 00:20:50.684 "method": "bdev_nvme_attach_controller" 00:20:50.684 } 00:20:50.684 EOF 00:20:50.684 )") 00:20:50.684 [2024-11-20 15:29:54.293972] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:20:50.684 [2024-11-20 15:29:54.294023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218635 ] 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:50.684 { 00:20:50.684 "params": { 00:20:50.684 "name": "Nvme$subsystem", 00:20:50.684 "trtype": "$TEST_TRANSPORT", 00:20:50.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:50.684 "adrfam": "ipv4", 00:20:50.684 "trsvcid": "$NVMF_PORT", 00:20:50.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:50.684 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:20:50.684 "hdgst": ${hdgst:-false}, 00:20:50.684 "ddgst": ${ddgst:-false} 00:20:50.684 }, 00:20:50.684 "method": "bdev_nvme_attach_controller" 00:20:50.684 } 00:20:50.684 EOF 00:20:50.684 )") 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:50.684 { 00:20:50.684 "params": { 00:20:50.684 "name": "Nvme$subsystem", 00:20:50.684 "trtype": "$TEST_TRANSPORT", 00:20:50.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:50.684 "adrfam": "ipv4", 00:20:50.684 "trsvcid": "$NVMF_PORT", 00:20:50.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:50.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:50.684 "hdgst": ${hdgst:-false}, 00:20:50.684 "ddgst": ${ddgst:-false} 00:20:50.684 }, 00:20:50.684 "method": "bdev_nvme_attach_controller" 00:20:50.684 } 00:20:50.684 EOF 00:20:50.684 )") 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
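The trace above repeats one loop body per subsystem: `nvmf/common.sh` appends a heredoc JSON fragment to a `config` array for each subsystem, then joins the fragments and pipes them through `jq`. A minimal sketch of that pattern follows; the variable values are stand-ins matching this test run (the real script takes the subsystem list from its arguments via `"${@:-1}"`), and the fragment layout mirrors the `bdev_nvme_attach_controller` params printed in the trace.

```shell
#!/usr/bin/env bash
# Sketch of the config-assembly pattern traced above: each loop
# iteration appends one heredoc JSON fragment to an array; the
# fragments are later joined with commas and fed to jq.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with commas, as the IFS=, + printf step does
# in the trace, producing the comma-separated object stream for jq.
(IFS=,; printf '%s\n' "${config[*]}")
```

The comma join is why the final `printf '%s\n'` output in the trace shows `},{` between consecutive controller entries.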
00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:50.684 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:50.684 "params": { 00:20:50.684 "name": "Nvme1", 00:20:50.684 "trtype": "tcp", 00:20:50.684 "traddr": "10.0.0.2", 00:20:50.684 "adrfam": "ipv4", 00:20:50.684 "trsvcid": "4420", 00:20:50.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:50.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:50.685 "hdgst": false, 00:20:50.685 "ddgst": false 00:20:50.685 }, 00:20:50.685 "method": "bdev_nvme_attach_controller" 00:20:50.685 },{ 00:20:50.685 "params": { 00:20:50.685 "name": "Nvme2", 00:20:50.685 "trtype": "tcp", 00:20:50.685 "traddr": "10.0.0.2", 00:20:50.685 "adrfam": "ipv4", 00:20:50.685 "trsvcid": "4420", 00:20:50.685 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:50.685 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:50.685 "hdgst": false, 00:20:50.685 "ddgst": false 00:20:50.685 }, 00:20:50.685 "method": "bdev_nvme_attach_controller" 00:20:50.685 },{ 00:20:50.685 "params": { 00:20:50.685 "name": "Nvme3", 00:20:50.685 "trtype": "tcp", 00:20:50.685 "traddr": "10.0.0.2", 00:20:50.685 "adrfam": "ipv4", 00:20:50.685 "trsvcid": "4420", 00:20:50.685 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:50.685 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:50.685 "hdgst": false, 00:20:50.685 "ddgst": false 00:20:50.685 }, 00:20:50.685 "method": "bdev_nvme_attach_controller" 00:20:50.685 },{ 00:20:50.685 "params": { 00:20:50.685 "name": "Nvme4", 00:20:50.685 "trtype": "tcp", 00:20:50.685 "traddr": "10.0.0.2", 00:20:50.685 "adrfam": "ipv4", 00:20:50.685 "trsvcid": "4420", 00:20:50.685 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:50.685 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:50.685 "hdgst": false, 00:20:50.685 "ddgst": false 00:20:50.685 }, 00:20:50.685 "method": "bdev_nvme_attach_controller" 00:20:50.685 },{ 00:20:50.685 "params": { 
00:20:50.685 "name": "Nvme5", 00:20:50.685 "trtype": "tcp", 00:20:50.685 "traddr": "10.0.0.2", 00:20:50.685 "adrfam": "ipv4", 00:20:50.685 "trsvcid": "4420", 00:20:50.685 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:50.685 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:50.685 "hdgst": false, 00:20:50.685 "ddgst": false 00:20:50.685 }, 00:20:50.685 "method": "bdev_nvme_attach_controller" 00:20:50.685 },{ 00:20:50.685 "params": { 00:20:50.685 "name": "Nvme6", 00:20:50.685 "trtype": "tcp", 00:20:50.685 "traddr": "10.0.0.2", 00:20:50.685 "adrfam": "ipv4", 00:20:50.685 "trsvcid": "4420", 00:20:50.685 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:50.685 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:50.685 "hdgst": false, 00:20:50.685 "ddgst": false 00:20:50.685 }, 00:20:50.685 "method": "bdev_nvme_attach_controller" 00:20:50.685 },{ 00:20:50.685 "params": { 00:20:50.685 "name": "Nvme7", 00:20:50.685 "trtype": "tcp", 00:20:50.685 "traddr": "10.0.0.2", 00:20:50.685 "adrfam": "ipv4", 00:20:50.685 "trsvcid": "4420", 00:20:50.685 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:50.685 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:50.685 "hdgst": false, 00:20:50.685 "ddgst": false 00:20:50.685 }, 00:20:50.685 "method": "bdev_nvme_attach_controller" 00:20:50.685 },{ 00:20:50.685 "params": { 00:20:50.685 "name": "Nvme8", 00:20:50.685 "trtype": "tcp", 00:20:50.685 "traddr": "10.0.0.2", 00:20:50.685 "adrfam": "ipv4", 00:20:50.685 "trsvcid": "4420", 00:20:50.685 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:50.685 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:50.685 "hdgst": false, 00:20:50.685 "ddgst": false 00:20:50.685 }, 00:20:50.685 "method": "bdev_nvme_attach_controller" 00:20:50.685 },{ 00:20:50.685 "params": { 00:20:50.685 "name": "Nvme9", 00:20:50.685 "trtype": "tcp", 00:20:50.685 "traddr": "10.0.0.2", 00:20:50.685 "adrfam": "ipv4", 00:20:50.685 "trsvcid": "4420", 00:20:50.685 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:50.685 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:50.685 "hdgst": false, 00:20:50.685 "ddgst": false 00:20:50.685 }, 00:20:50.685 "method": "bdev_nvme_attach_controller" 00:20:50.685 },{ 00:20:50.685 "params": { 00:20:50.685 "name": "Nvme10", 00:20:50.685 "trtype": "tcp", 00:20:50.685 "traddr": "10.0.0.2", 00:20:50.685 "adrfam": "ipv4", 00:20:50.685 "trsvcid": "4420", 00:20:50.685 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:50.685 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:50.685 "hdgst": false, 00:20:50.685 "ddgst": false 00:20:50.685 }, 00:20:50.685 "method": "bdev_nvme_attach_controller" 00:20:50.685 }' 00:20:50.685 [2024-11-20 15:29:54.374321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.685 [2024-11-20 15:29:54.415823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.590 Running I/O for 1 seconds... 00:20:53.527 2194.00 IOPS, 137.12 MiB/s 00:20:53.527 Latency(us) 00:20:53.527 [2024-11-20T14:29:57.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.527 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:53.527 Verification LBA range: start 0x0 length 0x400 00:20:53.527 Nvme1n1 : 1.17 273.98 17.12 0.00 0.00 229724.34 16982.37 217921.45 00:20:53.527 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:53.527 Verification LBA range: start 0x0 length 0x400 00:20:53.527 Nvme2n1 : 1.05 243.03 15.19 0.00 0.00 256748.86 18122.13 227951.30 00:20:53.527 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:53.527 Verification LBA range: start 0x0 length 0x400 00:20:53.527 Nvme3n1 : 1.15 278.50 17.41 0.00 0.00 221164.23 16070.57 226127.69 00:20:53.527 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:53.527 Verification LBA range: start 0x0 length 0x400 00:20:53.527 Nvme4n1 : 1.15 283.54 17.72 0.00 0.00 209729.84 8548.17 218833.25 00:20:53.527 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:20:53.527 Verification LBA range: start 0x0 length 0x400 00:20:53.527 Nvme5n1 : 1.18 271.97 17.00 0.00 0.00 219038.05 19375.86 216097.84 00:20:53.527 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:53.527 Verification LBA range: start 0x0 length 0x400 00:20:53.527 Nvme6n1 : 1.16 280.94 17.56 0.00 0.00 209704.55 3932.16 218833.25 00:20:53.527 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:53.527 Verification LBA range: start 0x0 length 0x400 00:20:53.527 Nvme7n1 : 1.16 274.82 17.18 0.00 0.00 211701.94 17096.35 233422.14 00:20:53.527 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:53.527 Verification LBA range: start 0x0 length 0x400 00:20:53.527 Nvme8n1 : 1.21 267.51 16.72 0.00 0.00 207594.60 7408.42 221568.67 00:20:53.527 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:53.527 Verification LBA range: start 0x0 length 0x400 00:20:53.527 Nvme9n1 : 1.18 271.15 16.95 0.00 0.00 208589.33 15728.64 222480.47 00:20:53.527 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:53.527 Verification LBA range: start 0x0 length 0x400 00:20:53.527 Nvme10n1 : 1.18 272.14 17.01 0.00 0.00 204653.43 17552.25 238892.97 00:20:53.527 [2024-11-20T14:29:57.435Z] =================================================================================================================== 00:20:53.527 [2024-11-20T14:29:57.435Z] Total : 2717.59 169.85 0.00 0.00 217035.99 3932.16 238892.97 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
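The Total row of the bdevperf summary above can be sanity-checked by summing the per-device rows. A quick check with the IOPS values transcribed from the table (the column sum agrees with the reported Total of 2717.59 to within the per-row rounding of the table):

```shell
#!/usr/bin/env bash
# IOPS values transcribed from the bdevperf summary above (Nvme1n1
# through Nvme10n1). Their sum should match the reported Total row
# (2717.59 IOPS) to within per-row rounding.
iops="273.98 243.03 278.50 283.54 271.97 280.94 274.82 267.51 271.15 272.14"
awk -v vals="$iops" 'BEGIN {
  n = split(vals, a, " ")
  for (i = 1; i <= n; i++) s += a[i]
  printf "sum of per-device IOPS: %.2f\n", s
}'
```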
00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:53.786 rmmod nvme_tcp 00:20:53.786 rmmod nvme_fabrics 00:20:53.786 rmmod nvme_keyring 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2217879 ']' 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2217879 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2217879 ']' 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 2217879 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2217879 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2217879' 00:20:53.786 killing process with pid 2217879 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2217879 00:20:53.786 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2217879 00:20:54.355 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:54.355 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:54.355 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:54.355 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:54.355 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:54.355 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:54.355 15:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:54.355 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:54.355 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:54.355 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.355 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.355 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.260 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:56.260 00:20:56.260 real 0m15.584s 00:20:56.260 user 0m35.398s 00:20:56.260 sys 0m5.848s 00:20:56.260 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:56.260 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:56.260 ************************************ 00:20:56.260 END TEST nvmf_shutdown_tc1 00:20:56.260 ************************************ 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:56.261 ************************************ 00:20:56.261 
START TEST nvmf_shutdown_tc2 00:20:56.261 ************************************ 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:56.261 15:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:56.261 15:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:56.261 15:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:56.261 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:56.261 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:56.261 15:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.261 15:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:56.261 Found net devices under 0000:86:00.0: cvl_0_0 00:20:56.261 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:56.262 Found net devices under 0000:86:00.1: cvl_0_1 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:56.262 15:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:56.262 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:56.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:56.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:20:56.521 00:20:56.521 --- 10.0.0.2 ping statistics --- 00:20:56.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.521 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:56.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:56.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:20:56.521 00:20:56.521 --- 10.0.0.1 ping statistics --- 00:20:56.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.521 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:56.521 15:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2219693 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2219693 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2219693 ']' 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.521 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:56.779 [2024-11-20 15:30:00.473230] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:20:56.779 [2024-11-20 15:30:00.473275] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.779 [2024-11-20 15:30:00.555876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:56.779 [2024-11-20 15:30:00.601006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.779 [2024-11-20 15:30:00.601041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.779 [2024-11-20 15:30:00.601048] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.779 [2024-11-20 15:30:00.601054] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.779 [2024-11-20 15:30:00.601059] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:56.779 [2024-11-20 15:30:00.602481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.779 [2024-11-20 15:30:00.602586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:56.779 [2024-11-20 15:30:00.602692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.779 [2024-11-20 15:30:00.602693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:57.715 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.715 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:57.715 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:57.715 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:57.715 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.715 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.715 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:57.715 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.715 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.715 [2024-11-20 15:30:01.357631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.715 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.715 15:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:57.715 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:57.715 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:57.715 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.716 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.716 Malloc1 00:20:57.716 [2024-11-20 15:30:01.462213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.716 Malloc2 00:20:57.716 Malloc3 00:20:57.716 Malloc4 00:20:57.716 Malloc5 00:20:57.975 Malloc6 00:20:57.975 Malloc7 00:20:57.975 Malloc8 00:20:57.975 Malloc9 
00:20:57.975 Malloc10 00:20:57.975 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.975 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:57.975 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:57.975 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:58.235 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2220067 00:20:58.235 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2220067 /var/tmp/bdevperf.sock 00:20:58.235 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2220067 ']' 00:20:58.235 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.235 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:58.235 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.235 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:58.235 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:58.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:58.235 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:58.235 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.235 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:58.235 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:58.235 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.235 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.235 { 00:20:58.235 "params": { 00:20:58.235 "name": "Nvme$subsystem", 00:20:58.235 "trtype": "$TEST_TRANSPORT", 00:20:58.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.235 "adrfam": "ipv4", 00:20:58.236 "trsvcid": "$NVMF_PORT", 00:20:58.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.236 "hdgst": ${hdgst:-false}, 00:20:58.236 "ddgst": ${ddgst:-false} 00:20:58.236 }, 00:20:58.236 "method": "bdev_nvme_attach_controller" 00:20:58.236 } 00:20:58.236 EOF 00:20:58.236 )") 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.236 { 00:20:58.236 "params": { 00:20:58.236 "name": "Nvme$subsystem", 00:20:58.236 "trtype": "$TEST_TRANSPORT", 00:20:58.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.236 
"adrfam": "ipv4", 00:20:58.236 "trsvcid": "$NVMF_PORT", 00:20:58.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.236 "hdgst": ${hdgst:-false}, 00:20:58.236 "ddgst": ${ddgst:-false} 00:20:58.236 }, 00:20:58.236 "method": "bdev_nvme_attach_controller" 00:20:58.236 } 00:20:58.236 EOF 00:20:58.236 )") 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.236 { 00:20:58.236 "params": { 00:20:58.236 "name": "Nvme$subsystem", 00:20:58.236 "trtype": "$TEST_TRANSPORT", 00:20:58.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.236 "adrfam": "ipv4", 00:20:58.236 "trsvcid": "$NVMF_PORT", 00:20:58.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.236 "hdgst": ${hdgst:-false}, 00:20:58.236 "ddgst": ${ddgst:-false} 00:20:58.236 }, 00:20:58.236 "method": "bdev_nvme_attach_controller" 00:20:58.236 } 00:20:58.236 EOF 00:20:58.236 )") 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.236 { 00:20:58.236 "params": { 00:20:58.236 "name": "Nvme$subsystem", 00:20:58.236 "trtype": "$TEST_TRANSPORT", 00:20:58.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.236 "adrfam": "ipv4", 00:20:58.236 "trsvcid": "$NVMF_PORT", 00:20:58.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:58.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.236 "hdgst": ${hdgst:-false}, 00:20:58.236 "ddgst": ${ddgst:-false} 00:20:58.236 }, 00:20:58.236 "method": "bdev_nvme_attach_controller" 00:20:58.236 } 00:20:58.236 EOF 00:20:58.236 )") 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.236 { 00:20:58.236 "params": { 00:20:58.236 "name": "Nvme$subsystem", 00:20:58.236 "trtype": "$TEST_TRANSPORT", 00:20:58.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.236 "adrfam": "ipv4", 00:20:58.236 "trsvcid": "$NVMF_PORT", 00:20:58.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.236 "hdgst": ${hdgst:-false}, 00:20:58.236 "ddgst": ${ddgst:-false} 00:20:58.236 }, 00:20:58.236 "method": "bdev_nvme_attach_controller" 00:20:58.236 } 00:20:58.236 EOF 00:20:58.236 )") 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.236 { 00:20:58.236 "params": { 00:20:58.236 "name": "Nvme$subsystem", 00:20:58.236 "trtype": "$TEST_TRANSPORT", 00:20:58.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.236 "adrfam": "ipv4", 00:20:58.236 "trsvcid": "$NVMF_PORT", 00:20:58.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.236 "hdgst": ${hdgst:-false}, 00:20:58.236 "ddgst": 
${ddgst:-false} 00:20:58.236 }, 00:20:58.236 "method": "bdev_nvme_attach_controller" 00:20:58.236 } 00:20:58.236 EOF 00:20:58.236 )") 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.236 { 00:20:58.236 "params": { 00:20:58.236 "name": "Nvme$subsystem", 00:20:58.236 "trtype": "$TEST_TRANSPORT", 00:20:58.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.236 "adrfam": "ipv4", 00:20:58.236 "trsvcid": "$NVMF_PORT", 00:20:58.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.236 "hdgst": ${hdgst:-false}, 00:20:58.236 "ddgst": ${ddgst:-false} 00:20:58.236 }, 00:20:58.236 "method": "bdev_nvme_attach_controller" 00:20:58.236 } 00:20:58.236 EOF 00:20:58.236 )") 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:58.236 [2024-11-20 15:30:01.938651] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:20:58.236 [2024-11-20 15:30:01.938701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220067 ] 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.236 { 00:20:58.236 "params": { 00:20:58.236 "name": "Nvme$subsystem", 00:20:58.236 "trtype": "$TEST_TRANSPORT", 00:20:58.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.236 "adrfam": "ipv4", 00:20:58.236 "trsvcid": "$NVMF_PORT", 00:20:58.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.236 "hdgst": ${hdgst:-false}, 00:20:58.236 "ddgst": ${ddgst:-false} 00:20:58.236 }, 00:20:58.236 "method": "bdev_nvme_attach_controller" 00:20:58.236 } 00:20:58.236 EOF 00:20:58.236 )") 00:20:58.236 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:58.237 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.237 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.237 { 00:20:58.237 "params": { 00:20:58.237 "name": "Nvme$subsystem", 00:20:58.237 "trtype": "$TEST_TRANSPORT", 00:20:58.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.237 "adrfam": "ipv4", 00:20:58.237 "trsvcid": "$NVMF_PORT", 00:20:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.237 "hdgst": ${hdgst:-false}, 00:20:58.237 "ddgst": ${ddgst:-false} 00:20:58.237 }, 00:20:58.237 "method": 
"bdev_nvme_attach_controller" 00:20:58.237 } 00:20:58.237 EOF 00:20:58.237 )") 00:20:58.237 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:58.237 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.237 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.237 { 00:20:58.237 "params": { 00:20:58.237 "name": "Nvme$subsystem", 00:20:58.237 "trtype": "$TEST_TRANSPORT", 00:20:58.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.237 "adrfam": "ipv4", 00:20:58.237 "trsvcid": "$NVMF_PORT", 00:20:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.237 "hdgst": ${hdgst:-false}, 00:20:58.237 "ddgst": ${ddgst:-false} 00:20:58.237 }, 00:20:58.237 "method": "bdev_nvme_attach_controller" 00:20:58.237 } 00:20:58.237 EOF 00:20:58.237 )") 00:20:58.237 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:58.237 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:20:58.237 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:58.237 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:58.237 "params": { 00:20:58.237 "name": "Nvme1", 00:20:58.237 "trtype": "tcp", 00:20:58.237 "traddr": "10.0.0.2", 00:20:58.237 "adrfam": "ipv4", 00:20:58.237 "trsvcid": "4420", 00:20:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.237 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.237 "hdgst": false, 00:20:58.237 "ddgst": false 00:20:58.237 }, 00:20:58.237 "method": "bdev_nvme_attach_controller" 00:20:58.237 },{ 00:20:58.237 "params": { 00:20:58.237 "name": "Nvme2", 00:20:58.237 "trtype": "tcp", 00:20:58.237 "traddr": "10.0.0.2", 00:20:58.237 "adrfam": "ipv4", 00:20:58.237 "trsvcid": "4420", 00:20:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:58.237 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:58.237 "hdgst": false, 00:20:58.237 "ddgst": false 00:20:58.237 }, 00:20:58.237 "method": "bdev_nvme_attach_controller" 00:20:58.237 },{ 00:20:58.237 "params": { 00:20:58.237 "name": "Nvme3", 00:20:58.237 "trtype": "tcp", 00:20:58.237 "traddr": "10.0.0.2", 00:20:58.237 "adrfam": "ipv4", 00:20:58.237 "trsvcid": "4420", 00:20:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:58.237 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:58.237 "hdgst": false, 00:20:58.237 "ddgst": false 00:20:58.237 }, 00:20:58.237 "method": "bdev_nvme_attach_controller" 00:20:58.237 },{ 00:20:58.237 "params": { 00:20:58.237 "name": "Nvme4", 00:20:58.237 "trtype": "tcp", 00:20:58.237 "traddr": "10.0.0.2", 00:20:58.237 "adrfam": "ipv4", 00:20:58.237 "trsvcid": "4420", 00:20:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:58.237 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:58.237 "hdgst": false, 00:20:58.237 "ddgst": false 00:20:58.237 }, 00:20:58.237 "method": "bdev_nvme_attach_controller" 00:20:58.237 },{ 00:20:58.237 "params": { 
00:20:58.237 "name": "Nvme5", 00:20:58.237 "trtype": "tcp", 00:20:58.237 "traddr": "10.0.0.2", 00:20:58.237 "adrfam": "ipv4", 00:20:58.237 "trsvcid": "4420", 00:20:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:58.237 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:58.237 "hdgst": false, 00:20:58.237 "ddgst": false 00:20:58.237 }, 00:20:58.237 "method": "bdev_nvme_attach_controller" 00:20:58.237 },{ 00:20:58.237 "params": { 00:20:58.237 "name": "Nvme6", 00:20:58.237 "trtype": "tcp", 00:20:58.237 "traddr": "10.0.0.2", 00:20:58.237 "adrfam": "ipv4", 00:20:58.237 "trsvcid": "4420", 00:20:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:58.237 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:58.237 "hdgst": false, 00:20:58.237 "ddgst": false 00:20:58.237 }, 00:20:58.237 "method": "bdev_nvme_attach_controller" 00:20:58.237 },{ 00:20:58.237 "params": { 00:20:58.237 "name": "Nvme7", 00:20:58.237 "trtype": "tcp", 00:20:58.237 "traddr": "10.0.0.2", 00:20:58.237 "adrfam": "ipv4", 00:20:58.237 "trsvcid": "4420", 00:20:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:58.237 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:58.237 "hdgst": false, 00:20:58.237 "ddgst": false 00:20:58.237 }, 00:20:58.237 "method": "bdev_nvme_attach_controller" 00:20:58.237 },{ 00:20:58.237 "params": { 00:20:58.237 "name": "Nvme8", 00:20:58.237 "trtype": "tcp", 00:20:58.237 "traddr": "10.0.0.2", 00:20:58.237 "adrfam": "ipv4", 00:20:58.237 "trsvcid": "4420", 00:20:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:58.237 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:58.237 "hdgst": false, 00:20:58.237 "ddgst": false 00:20:58.237 }, 00:20:58.237 "method": "bdev_nvme_attach_controller" 00:20:58.237 },{ 00:20:58.237 "params": { 00:20:58.237 "name": "Nvme9", 00:20:58.237 "trtype": "tcp", 00:20:58.237 "traddr": "10.0.0.2", 00:20:58.237 "adrfam": "ipv4", 00:20:58.237 "trsvcid": "4420", 00:20:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:58.237 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:58.237 "hdgst": false, 00:20:58.237 "ddgst": false 00:20:58.237 }, 00:20:58.237 "method": "bdev_nvme_attach_controller" 00:20:58.237 },{ 00:20:58.237 "params": { 00:20:58.237 "name": "Nvme10", 00:20:58.237 "trtype": "tcp", 00:20:58.237 "traddr": "10.0.0.2", 00:20:58.237 "adrfam": "ipv4", 00:20:58.237 "trsvcid": "4420", 00:20:58.237 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:58.237 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:58.238 "hdgst": false, 00:20:58.238 "ddgst": false 00:20:58.238 }, 00:20:58.238 "method": "bdev_nvme_attach_controller" 00:20:58.238 }' 00:20:58.238 [2024-11-20 15:30:02.015941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.238 [2024-11-20 15:30:02.057756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.618 Running I/O for 10 seconds... 00:21:00.186 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.186 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:00.186 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:00.186 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.186 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:00.186 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.186 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:00.186 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:00.186 15:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:00.186 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:00.186 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:00.186 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:00.187 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:00.187 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:00.187 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:00.187 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.187 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:00.187 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.187 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:00.187 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:00.187 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:00.446 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:00.446 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:00.446 15:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:00.446 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:00.446 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.446 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:00.446 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.446 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=136 00:21:00.446 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:21:00.446 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:00.446 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:00.446 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:00.446 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2220067 00:21:00.446 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2220067 ']' 00:21:00.446 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2220067 00:21:00.446 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:00.446 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:00.446 15:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2220067 00:21:00.446 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:00.446 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:00.446 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2220067' 00:21:00.446 killing process with pid 2220067 00:21:00.446 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2220067 00:21:00.447 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2220067 00:21:00.447 Received shutdown signal, test time was about 0.854794 seconds 00:21:00.447 00:21:00.447 Latency(us) 00:21:00.447 [2024-11-20T14:30:04.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.447 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:00.447 Verification LBA range: start 0x0 length 0x400 00:21:00.447 Nvme1n1 : 0.85 305.57 19.10 0.00 0.00 197115.65 26100.42 197861.73 00:21:00.447 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:00.447 Verification LBA range: start 0x0 length 0x400 00:21:00.447 Nvme2n1 : 0.82 312.73 19.55 0.00 0.00 197977.93 16754.42 216097.84 00:21:00.447 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:00.447 Verification LBA range: start 0x0 length 0x400 00:21:00.447 Nvme3n1 : 0.82 313.98 19.62 0.00 0.00 193129.96 16184.54 218833.25 00:21:00.447 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:00.447 Verification LBA range: start 0x0 length 0x400 00:21:00.447 Nvme4n1 : 0.81 320.52 20.03 0.00 0.00 184847.85 
2806.65 207891.59 00:21:00.447 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:00.447 Verification LBA range: start 0x0 length 0x400 00:21:00.447 Nvme5n1 : 0.80 240.52 15.03 0.00 0.00 241341.37 25644.52 220656.86 00:21:00.447 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:00.447 Verification LBA range: start 0x0 length 0x400 00:21:00.447 Nvme6n1 : 0.80 239.81 14.99 0.00 0.00 236790.65 16982.37 224304.08 00:21:00.447 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:00.447 Verification LBA range: start 0x0 length 0x400 00:21:00.447 Nvme7n1 : 0.79 243.91 15.24 0.00 0.00 227069.48 14702.86 217009.64 00:21:00.447 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:00.447 Verification LBA range: start 0x0 length 0x400 00:21:00.447 Nvme8n1 : 0.78 251.12 15.70 0.00 0.00 213903.84 1752.38 218833.25 00:21:00.447 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:00.447 Verification LBA range: start 0x0 length 0x400 00:21:00.447 Nvme9n1 : 0.81 238.40 14.90 0.00 0.00 222511.64 19831.76 249834.63 00:21:00.447 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:00.447 Verification LBA range: start 0x0 length 0x400 00:21:00.447 Nvme10n1 : 0.81 237.81 14.86 0.00 0.00 217888.20 29405.72 231598.53 00:21:00.447 [2024-11-20T14:30:04.355Z] =================================================================================================================== 00:21:00.447 [2024-11-20T14:30:04.355Z] Total : 2704.37 169.02 0.00 0.00 210833.68 1752.38 249834.63 00:21:00.706 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:01.643 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2219693 00:21:01.643 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # 
stoptarget 00:21:01.644 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:01.644 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:01.644 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:01.644 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:01.644 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:01.644 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:01.644 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:01.644 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:01.644 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:01.644 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:01.644 rmmod nvme_tcp 00:21:01.903 rmmod nvme_fabrics 00:21:01.903 rmmod nvme_keyring 00:21:01.903 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:01.903 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:01.903 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:01.903 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2219693 ']' 
00:21:01.903 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2219693 00:21:01.903 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2219693 ']' 00:21:01.903 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2219693 00:21:01.903 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:01.903 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.903 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2219693 00:21:01.903 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:01.903 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:01.903 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2219693' 00:21:01.903 killing process with pid 2219693 00:21:01.903 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2219693 00:21:01.903 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2219693 00:21:02.163 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:02.163 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:02.163 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:02.163 15:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:02.163 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:02.163 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:02.163 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:02.163 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:02.163 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:02.163 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.163 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.163 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:04.702 00:21:04.702 real 0m7.982s 00:21:04.702 user 0m24.264s 00:21:04.702 sys 0m1.353s 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:04.702 ************************************ 00:21:04.702 END TEST nvmf_shutdown_tc2 00:21:04.702 ************************************ 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:04.702 15:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:04.702 ************************************ 00:21:04.702 START TEST nvmf_shutdown_tc3 00:21:04.702 ************************************ 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.702 15:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:04.702 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:04.703 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.703 15:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:04.703 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:04.703 Found net devices under 0000:86:00.0: cvl_0_0 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:04.703 Found net devices under 0000:86:00.1: cvl_0_1 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.703 
15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:04.703 15:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:21:04.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:04.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:21:04.703 00:21:04.703 --- 10.0.0.2 ping statistics --- 00:21:04.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.703 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:04.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:04.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:21:04.703 00:21:04.703 --- 10.0.0.1 ping statistics --- 00:21:04.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.703 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:04.703 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # 
modprobe nvme-tcp 00:21:04.704 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:04.704 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:04.704 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:04.704 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.704 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2221461 00:21:04.704 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2221461 00:21:04.704 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:04.704 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2221461 ']' 00:21:04.704 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.704 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.704 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:04.704 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.704 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.704 [2024-11-20 15:30:08.550747] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:21:04.704 [2024-11-20 15:30:08.550802] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.964 [2024-11-20 15:30:08.635405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:04.964 [2024-11-20 15:30:08.677984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.964 [2024-11-20 15:30:08.678022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.964 [2024-11-20 15:30:08.678029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.964 [2024-11-20 15:30:08.678036] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.964 [2024-11-20 15:30:08.678051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:04.964 [2024-11-20 15:30:08.679726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.964 [2024-11-20 15:30:08.679832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:04.964 [2024-11-20 15:30:08.679940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.964 [2024-11-20 15:30:08.679940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.964 [2024-11-20 15:30:08.815828] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.964 15:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.964 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:05.223 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:05.223 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.223 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:05.223 Malloc1 00:21:05.223 [2024-11-20 15:30:08.930080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.223 Malloc2 00:21:05.223 Malloc3 00:21:05.223 Malloc4 00:21:05.223 Malloc5 00:21:05.223 Malloc6 00:21:05.483 Malloc7 00:21:05.483 Malloc8 00:21:05.483 Malloc9 
00:21:05.483 Malloc10 00:21:05.483 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.483 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:05.483 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:05.483 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:05.483 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2221795 00:21:05.483 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2221795 /var/tmp/bdevperf.sock 00:21:05.483 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2221795 ']' 00:21:05.483 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.483 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:05.483 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.483 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:05.483 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:05.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:05.483 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:05.483 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.483 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:05.483 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:05.483 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.483 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.483 { 00:21:05.483 "params": { 00:21:05.483 "name": "Nvme$subsystem", 00:21:05.483 "trtype": "$TEST_TRANSPORT", 00:21:05.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.483 "adrfam": "ipv4", 00:21:05.483 "trsvcid": "$NVMF_PORT", 00:21:05.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.484 "hdgst": ${hdgst:-false}, 00:21:05.484 "ddgst": ${ddgst:-false} 00:21:05.484 }, 00:21:05.484 "method": "bdev_nvme_attach_controller" 00:21:05.484 } 00:21:05.484 EOF 00:21:05.484 )") 00:21:05.484 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.484 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.484 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.484 { 00:21:05.484 "params": { 00:21:05.484 "name": "Nvme$subsystem", 00:21:05.484 "trtype": "$TEST_TRANSPORT", 00:21:05.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.484 
"adrfam": "ipv4", 00:21:05.484 "trsvcid": "$NVMF_PORT", 00:21:05.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.484 "hdgst": ${hdgst:-false}, 00:21:05.484 "ddgst": ${ddgst:-false} 00:21:05.484 }, 00:21:05.484 "method": "bdev_nvme_attach_controller" 00:21:05.484 } 00:21:05.484 EOF 00:21:05.484 )") 00:21:05.484 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.484 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.484 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.484 { 00:21:05.484 "params": { 00:21:05.484 "name": "Nvme$subsystem", 00:21:05.484 "trtype": "$TEST_TRANSPORT", 00:21:05.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.484 "adrfam": "ipv4", 00:21:05.484 "trsvcid": "$NVMF_PORT", 00:21:05.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.484 "hdgst": ${hdgst:-false}, 00:21:05.484 "ddgst": ${ddgst:-false} 00:21:05.484 }, 00:21:05.484 "method": "bdev_nvme_attach_controller" 00:21:05.484 } 00:21:05.484 EOF 00:21:05.484 )") 00:21:05.484 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.484 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.484 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.484 { 00:21:05.484 "params": { 00:21:05.484 "name": "Nvme$subsystem", 00:21:05.484 "trtype": "$TEST_TRANSPORT", 00:21:05.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.484 "adrfam": "ipv4", 00:21:05.484 "trsvcid": "$NVMF_PORT", 00:21:05.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:05.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.484 "hdgst": ${hdgst:-false}, 00:21:05.484 "ddgst": ${ddgst:-false} 00:21:05.484 }, 00:21:05.484 "method": "bdev_nvme_attach_controller" 00:21:05.484 } 00:21:05.484 EOF 00:21:05.484 )") 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.742 { 00:21:05.742 "params": { 00:21:05.742 "name": "Nvme$subsystem", 00:21:05.742 "trtype": "$TEST_TRANSPORT", 00:21:05.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.742 "adrfam": "ipv4", 00:21:05.742 "trsvcid": "$NVMF_PORT", 00:21:05.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.742 "hdgst": ${hdgst:-false}, 00:21:05.742 "ddgst": ${ddgst:-false} 00:21:05.742 }, 00:21:05.742 "method": "bdev_nvme_attach_controller" 00:21:05.742 } 00:21:05.742 EOF 00:21:05.742 )") 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.742 { 00:21:05.742 "params": { 00:21:05.742 "name": "Nvme$subsystem", 00:21:05.742 "trtype": "$TEST_TRANSPORT", 00:21:05.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.742 "adrfam": "ipv4", 00:21:05.742 "trsvcid": "$NVMF_PORT", 00:21:05.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.742 "hdgst": ${hdgst:-false}, 00:21:05.742 "ddgst": 
${ddgst:-false} 00:21:05.742 }, 00:21:05.742 "method": "bdev_nvme_attach_controller" 00:21:05.742 } 00:21:05.742 EOF 00:21:05.742 )") 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.742 { 00:21:05.742 "params": { 00:21:05.742 "name": "Nvme$subsystem", 00:21:05.742 "trtype": "$TEST_TRANSPORT", 00:21:05.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.742 "adrfam": "ipv4", 00:21:05.742 "trsvcid": "$NVMF_PORT", 00:21:05.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.742 "hdgst": ${hdgst:-false}, 00:21:05.742 "ddgst": ${ddgst:-false} 00:21:05.742 }, 00:21:05.742 "method": "bdev_nvme_attach_controller" 00:21:05.742 } 00:21:05.742 EOF 00:21:05.742 )") 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.742 [2024-11-20 15:30:09.410825] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:21:05.742 [2024-11-20 15:30:09.410872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221795 ] 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.742 { 00:21:05.742 "params": { 00:21:05.742 "name": "Nvme$subsystem", 00:21:05.742 "trtype": "$TEST_TRANSPORT", 00:21:05.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.742 "adrfam": "ipv4", 00:21:05.742 "trsvcid": "$NVMF_PORT", 00:21:05.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.742 "hdgst": ${hdgst:-false}, 00:21:05.742 "ddgst": ${ddgst:-false} 00:21:05.742 }, 00:21:05.742 "method": "bdev_nvme_attach_controller" 00:21:05.742 } 00:21:05.742 EOF 00:21:05.742 )") 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.742 { 00:21:05.742 "params": { 00:21:05.742 "name": "Nvme$subsystem", 00:21:05.742 "trtype": "$TEST_TRANSPORT", 00:21:05.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.742 "adrfam": "ipv4", 00:21:05.742 "trsvcid": "$NVMF_PORT", 00:21:05.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.742 "hdgst": ${hdgst:-false}, 00:21:05.742 "ddgst": ${ddgst:-false} 00:21:05.742 }, 00:21:05.742 "method": 
"bdev_nvme_attach_controller" 00:21:05.742 } 00:21:05.742 EOF 00:21:05.742 )") 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.742 { 00:21:05.742 "params": { 00:21:05.742 "name": "Nvme$subsystem", 00:21:05.742 "trtype": "$TEST_TRANSPORT", 00:21:05.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.742 "adrfam": "ipv4", 00:21:05.742 "trsvcid": "$NVMF_PORT", 00:21:05.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.742 "hdgst": ${hdgst:-false}, 00:21:05.742 "ddgst": ${ddgst:-false} 00:21:05.742 }, 00:21:05.742 "method": "bdev_nvme_attach_controller" 00:21:05.742 } 00:21:05.742 EOF 00:21:05.742 )") 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:05.742 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:05.742 "params": { 00:21:05.742 "name": "Nvme1", 00:21:05.742 "trtype": "tcp", 00:21:05.742 "traddr": "10.0.0.2", 00:21:05.742 "adrfam": "ipv4", 00:21:05.742 "trsvcid": "4420", 00:21:05.742 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.742 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.742 "hdgst": false, 00:21:05.742 "ddgst": false 00:21:05.742 }, 00:21:05.742 "method": "bdev_nvme_attach_controller" 00:21:05.742 },{ 00:21:05.742 "params": { 00:21:05.742 "name": "Nvme2", 00:21:05.742 "trtype": "tcp", 00:21:05.742 "traddr": "10.0.0.2", 00:21:05.742 "adrfam": "ipv4", 00:21:05.742 "trsvcid": "4420", 00:21:05.742 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:05.742 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:05.742 "hdgst": false, 00:21:05.742 "ddgst": false 00:21:05.742 }, 00:21:05.742 "method": "bdev_nvme_attach_controller" 00:21:05.742 },{ 00:21:05.742 "params": { 00:21:05.742 "name": "Nvme3", 00:21:05.743 "trtype": "tcp", 00:21:05.743 "traddr": "10.0.0.2", 00:21:05.743 "adrfam": "ipv4", 00:21:05.743 "trsvcid": "4420", 00:21:05.743 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:05.743 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:05.743 "hdgst": false, 00:21:05.743 "ddgst": false 00:21:05.743 }, 00:21:05.743 "method": "bdev_nvme_attach_controller" 00:21:05.743 },{ 00:21:05.743 "params": { 00:21:05.743 "name": "Nvme4", 00:21:05.743 "trtype": "tcp", 00:21:05.743 "traddr": "10.0.0.2", 00:21:05.743 "adrfam": "ipv4", 00:21:05.743 "trsvcid": "4420", 00:21:05.743 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:05.743 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:05.743 "hdgst": false, 00:21:05.743 "ddgst": false 00:21:05.743 }, 00:21:05.743 "method": "bdev_nvme_attach_controller" 00:21:05.743 },{ 00:21:05.743 "params": { 
00:21:05.743 "name": "Nvme5", 00:21:05.743 "trtype": "tcp", 00:21:05.743 "traddr": "10.0.0.2", 00:21:05.743 "adrfam": "ipv4", 00:21:05.743 "trsvcid": "4420", 00:21:05.743 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:05.743 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:05.743 "hdgst": false, 00:21:05.743 "ddgst": false 00:21:05.743 }, 00:21:05.743 "method": "bdev_nvme_attach_controller" 00:21:05.743 },{ 00:21:05.743 "params": { 00:21:05.743 "name": "Nvme6", 00:21:05.743 "trtype": "tcp", 00:21:05.743 "traddr": "10.0.0.2", 00:21:05.743 "adrfam": "ipv4", 00:21:05.743 "trsvcid": "4420", 00:21:05.743 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:05.743 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:05.743 "hdgst": false, 00:21:05.743 "ddgst": false 00:21:05.743 }, 00:21:05.743 "method": "bdev_nvme_attach_controller" 00:21:05.743 },{ 00:21:05.743 "params": { 00:21:05.743 "name": "Nvme7", 00:21:05.743 "trtype": "tcp", 00:21:05.743 "traddr": "10.0.0.2", 00:21:05.743 "adrfam": "ipv4", 00:21:05.743 "trsvcid": "4420", 00:21:05.743 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:05.743 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:05.743 "hdgst": false, 00:21:05.743 "ddgst": false 00:21:05.743 }, 00:21:05.743 "method": "bdev_nvme_attach_controller" 00:21:05.743 },{ 00:21:05.743 "params": { 00:21:05.743 "name": "Nvme8", 00:21:05.743 "trtype": "tcp", 00:21:05.743 "traddr": "10.0.0.2", 00:21:05.743 "adrfam": "ipv4", 00:21:05.743 "trsvcid": "4420", 00:21:05.743 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:05.743 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:05.743 "hdgst": false, 00:21:05.743 "ddgst": false 00:21:05.743 }, 00:21:05.743 "method": "bdev_nvme_attach_controller" 00:21:05.743 },{ 00:21:05.743 "params": { 00:21:05.743 "name": "Nvme9", 00:21:05.743 "trtype": "tcp", 00:21:05.743 "traddr": "10.0.0.2", 00:21:05.743 "adrfam": "ipv4", 00:21:05.743 "trsvcid": "4420", 00:21:05.743 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:05.743 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:05.743 "hdgst": false, 00:21:05.743 "ddgst": false 00:21:05.743 }, 00:21:05.743 "method": "bdev_nvme_attach_controller" 00:21:05.743 },{ 00:21:05.743 "params": { 00:21:05.743 "name": "Nvme10", 00:21:05.743 "trtype": "tcp", 00:21:05.743 "traddr": "10.0.0.2", 00:21:05.743 "adrfam": "ipv4", 00:21:05.743 "trsvcid": "4420", 00:21:05.743 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:05.743 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:05.743 "hdgst": false, 00:21:05.743 "ddgst": false 00:21:05.743 }, 00:21:05.743 "method": "bdev_nvme_attach_controller" 00:21:05.743 }' 00:21:05.743 [2024-11-20 15:30:09.485024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.743 [2024-11-20 15:30:09.526449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.648 Running I/O for 10 seconds... 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:07.648 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2221461 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2221461 ']' 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2221461 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:21:07.924 15:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2221461 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2221461' 00:21:07.924 killing process with pid 2221461 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2221461 00:21:07.924 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2221461 00:21:07.924 [2024-11-20 15:30:11.699239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.699584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.699592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.699598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.699605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.699610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.699617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.699622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.699630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.699637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.699644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.699651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.699657] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.699663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.699671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.699679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.699685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.699692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.699698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.699704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.699710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16db700 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.700937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.700977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.700986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.700993] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.700999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701078] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701159] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.925 [2024-11-20 15:30:11.701224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701235] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701314] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.701379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853290 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.703671] 
nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.926 [2024-11-20 15:30:11.704180] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.926 [2024-11-20 15:30:11.708117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 
00:21:07.926 [2024-11-20 15:30:11.708208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 
15:30:11.708287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708364] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708445] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.926 [2024-11-20 15:30:11.708507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.927 [2024-11-20 15:30:11.708513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.927 [2024-11-20 15:30:11.708519] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.927 [2024-11-20 15:30:11.708525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.927 [2024-11-20 15:30:11.708531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.927 [2024-11-20 15:30:11.708537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dbbf0 is same with the state(6) to be set 00:21:07.927 [2024-11-20 15:30:11.709625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc0c0 is same with the state(6) to be set 00:21:07.927 [2024-11-20 15:30:11.709656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc0c0 is same with the state(6) to be set 00:21:07.927 [2024-11-20 15:30:11.709664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc0c0 is same with the state(6) to be set 00:21:07.927 [2024-11-20 15:30:11.709670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc0c0 is same with the state(6) to be set 00:21:07.927 [2024-11-20 15:30:11.709678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc0c0 is same with the state(6) to be set 00:21:07.927 [2024-11-20 15:30:11.709684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc0c0 is same with the state(6) to be set 00:21:07.927 [2024-11-20 15:30:11.709691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc0c0 is same with the state(6) to be set 00:21:07.927 [2024-11-20 15:30:11.709698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc0c0 is same with the state(6) to be set 00:21:07.927 [2024-11-20 15:30:11.709703] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc0c0 is same with the state(6) to be set
[identical messages for tqpair=0x16dc0c0 repeated through 00:21:07.927; duplicates omitted]
00:21:07.927 [2024-11-20 15:30:11.710965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc5b0 is same with the state(6) to be set
[identical messages for tqpair=0x16dc5b0 repeated through 00:21:07.928; duplicates omitted]
00:21:07.928 [2024-11-20 15:30:11.712004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc930 is same with the state(6) to be set
[identical messages for tqpair=0x16dc930 repeated; duplicates omitted]
00:21:07.928 [2024-11-20 15:30:11.713229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dd2d0 is same with the state(6) to be set
[identical messages for tqpair=0x16dd2d0 repeated through 00:21:07.929; duplicates omitted]
00:21:07.929 [2024-11-20 15:30:11.714449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dd7c0 is same with the state(6) to be set
[identical messages for tqpair=0x16dd7c0 repeated through 00:21:07.930; duplicates omitted]
00:21:07.930 [2024-11-20 15:30:11.715435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set
[identical messages for tqpair=0x16ddc90 repeated; duplicates omitted]
00:21:07.930 [2024-11-20 15:30:11.715478] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715561] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715638] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715725] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715801] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.930 [2024-11-20 15:30:11.715859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddc90 is same with the state(6) to be set 00:21:07.931 [2024-11-20 15:30:11.724458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724496] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659d50 is same with the state(6) to be set 00:21:07.931 [2024-11-20 15:30:11.724582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa854c0 is same with the state(6) to be set 00:21:07.931 [2024-11-20 15:30:11.724669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724726] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaca140 is same with the state(6) to be set 00:21:07.931 [2024-11-20 15:30:11.724758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad0c40 is same with the state(6) to be set 00:21:07.931 [2024-11-20 15:30:11.724845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97590 is same with the state(6) to be set 00:21:07.931 [2024-11-20 15:30:11.724934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.724992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.724999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e610 is same with the state(6) to be set 00:21:07.931 [2024-11-20 15:30:11.725023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.725032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.725039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.725046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.725054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.725064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.725072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.725078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.725084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b300 is same with the state(6) to be set 00:21:07.931 [2024-11-20 15:30:11.725109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.725118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.725126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.725133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.725141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.725148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.725155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.725162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.725169] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a1b0 is same with the state(6) to be set 00:21:07.931 [2024-11-20 15:30:11.725193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.725202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.725210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.725217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.725225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.725232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.931 [2024-11-20 15:30:11.725239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.931 [2024-11-20 15:30:11.725247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.725254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657c70 is same with the state(6) to be set 00:21:07.932 [2024-11-20 15:30:11.725276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.932 [2024-11-20 15:30:11.725285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.725293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.932 [2024-11-20 15:30:11.725300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.725313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.932 [2024-11-20 15:30:11.725320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.725328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.932 [2024-11-20 15:30:11.725335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.725342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa847a0 is same with the state(6) to be set 00:21:07.932 [2024-11-20 15:30:11.725780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.725800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.725814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.725822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.725832] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.725839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.725848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.725856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.725865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.725872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.725880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.725887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.725896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.725903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.725912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.725919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.725928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.725935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.725943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.725960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.725974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.725981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.725990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.725997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 
[2024-11-20 15:30:11.726117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.932 [2024-11-20 15:30:11.726349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.932 [2024-11-20 15:30:11.726357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:07.933 [2024-11-20 15:30:11.726393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726482] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 
15:30:11.726753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726858] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:07.933 [2024-11-20 15:30:11.726942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.726987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.726995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.727004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.933 [2024-11-20 15:30:11.727011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.933 [2024-11-20 15:30:11.727019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727035] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 
[2024-11-20 15:30:11.727221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727313] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 
[2024-11-20 15:30:11.727586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.934 [2024-11-20 15:30:11.727647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.934 [2024-11-20 15:30:11.727654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.935 [2024-11-20 15:30:11.727669] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.935 [2024-11-20 15:30:11.727687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.935 [2024-11-20 15:30:11.727702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.935 [2024-11-20 15:30:11.727718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.935 [2024-11-20 15:30:11.727732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.935 [2024-11-20 15:30:11.727748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.935 [2024-11-20 15:30:11.727762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.935 [2024-11-20 15:30:11.727778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.935 [2024-11-20 15:30:11.727792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.935 [2024-11-20 15:30:11.727808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.935 [2024-11-20 15:30:11.727824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.935 [2024-11-20 15:30:11.727839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.935 [2024-11-20 15:30:11.727855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.935 [2024-11-20 15:30:11.727869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.935 [2024-11-20 15:30:11.727886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.935 [2024-11-20 15:30:11.727902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.935 [2024-11-20 15:30:11.727917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.935 [2024-11-20 
15:30:11.727933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.935 [2024-11-20 15:30:11.727951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.935 [2024-11-20 15:30:11.727967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.935 [2024-11-20 15:30:11.727974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5f450 is same with the state(6) to be set 00:21:07.935 [2024-11-20 15:30:11.730276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:07.935 [2024-11-20 15:30:11.730302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:07.935 [2024-11-20 15:30:11.730316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7b300 (9): Bad file descriptor 00:21:07.935 [2024-11-20 15:30:11.730328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa854c0 (9): Bad file descriptor 00:21:07.935 [2024-11-20 15:30:11.731278] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.935 [2024-11-20 15:30:11.731331] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.935 [2024-11-20 15:30:11.731363] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.935 
[2024-11-20 15:30:11.731568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.935 [2024-11-20 15:30:11.731584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa854c0 with addr=10.0.0.2, port=4420 00:21:07.935 [2024-11-20 15:30:11.731592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa854c0 is same with the state(6) to be set 00:21:07.935 [2024-11-20 15:30:11.731815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.935 [2024-11-20 15:30:11.731828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa7b300 with addr=10.0.0.2, port=4420 00:21:07.935 [2024-11-20 15:30:11.731836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b300 is same with the state(6) to be set 00:21:07.935 [2024-11-20 15:30:11.731882] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.935 [2024-11-20 15:30:11.731926] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.935 [2024-11-20 15:30:11.731981] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.935 [2024-11-20 15:30:11.732041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa854c0 (9): Bad file descriptor 00:21:07.935 [2024-11-20 15:30:11.732054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7b300 (9): Bad file descriptor 00:21:07.935 [2024-11-20 15:30:11.732123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:07.935 [2024-11-20 15:30:11.732133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:07.935 [2024-11-20 15:30:11.732142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in 
failed state. 00:21:07.935 [2024-11-20 15:30:11.732151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:07.935 [2024-11-20 15:30:11.732159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:07.935 [2024-11-20 15:30:11.732167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:07.935 [2024-11-20 15:30:11.732175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:07.935 [2024-11-20 15:30:11.732181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:07.935 [2024-11-20 15:30:11.734479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x659d50 (9): Bad file descriptor 00:21:07.935 [2024-11-20 15:30:11.734502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaca140 (9): Bad file descriptor 00:21:07.935 [2024-11-20 15:30:11.734520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad0c40 (9): Bad file descriptor 00:21:07.935 [2024-11-20 15:30:11.734537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa97590 (9): Bad file descriptor 00:21:07.935 [2024-11-20 15:30:11.734555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x56e610 (9): Bad file descriptor 00:21:07.935 [2024-11-20 15:30:11.734572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65a1b0 (9): Bad file descriptor 00:21:07.935 [2024-11-20 15:30:11.734587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x657c70 (9): Bad file descriptor 00:21:07.935 [2024-11-20 15:30:11.734603] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa847a0 (9): Bad file descriptor 00:21:07.935 [2024-11-20 15:30:11.740754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:07.935 [2024-11-20 15:30:11.740773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:07.935 [2024-11-20 15:30:11.741060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.935 [2024-11-20 15:30:11.741077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa7b300 with addr=10.0.0.2, port=4420 00:21:07.935 [2024-11-20 15:30:11.741086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b300 is same with the state(6) to be set 00:21:07.935 [2024-11-20 15:30:11.741308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.935 [2024-11-20 15:30:11.741320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa854c0 with addr=10.0.0.2, port=4420 00:21:07.935 [2024-11-20 15:30:11.741329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa854c0 is same with the state(6) to be set 00:21:07.935 [2024-11-20 15:30:11.741368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7b300 (9): Bad file descriptor 00:21:07.936 [2024-11-20 15:30:11.741380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa854c0 (9): Bad file descriptor 00:21:07.936 [2024-11-20 15:30:11.741419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:07.936 [2024-11-20 15:30:11.741428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:07.936 [2024-11-20 15:30:11.741436] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:07.936 [2024-11-20 15:30:11.741444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:07.936 [2024-11-20 15:30:11.741452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:07.936 [2024-11-20 15:30:11.741459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:07.936 [2024-11-20 15:30:11.741466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:07.936 [2024-11-20 15:30:11.741472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:07.936 [2024-11-20 15:30:11.744648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.744671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.744686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.744694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.744703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.744710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.744719] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.744726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.744734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.744741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.744750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.744757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.744766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.744773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.744781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.744788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.744797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.744803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.744815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.744822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.744830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.744836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.744845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.744852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.744860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.744867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.744876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.744883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.744892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.744899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.744907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.744913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.744921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.744930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.744939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.744951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.744961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.744969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.744978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.744985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.744993] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.745000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.745009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.745021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.745030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.745037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.745045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.745053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.745061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.745068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.745076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.745084] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.745093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.745100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.745108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.745114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.745123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.745130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.745138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.745146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.745155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.745162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.745170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.745177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.745185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.745192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.745201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.745207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.745216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.936 [2024-11-20 15:30:11.745225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.936 [2024-11-20 15:30:11.745234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 
15:30:11.745266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:07.937 [2024-11-20 15:30:11.745532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745615] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.745677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.745685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e4d0 is same with the state(6) to be set 00:21:07.937 [2024-11-20 15:30:11.746716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.746732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.746744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.746753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.746763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.746770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.746779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.746787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.746796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.746804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.746812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.746820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.746829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:07.937 [2024-11-20 15:30:11.746836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.746844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.746853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.937 [2024-11-20 15:30:11.746862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.937 [2024-11-20 15:30:11.746869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.746878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.746886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.746894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.746901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.746909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.746917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.746926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.746933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.746941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.746953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.746962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.746970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.746978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.746985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.746994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747196] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747284] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.938 [2024-11-20 15:30:11.747438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.938 [2024-11-20 15:30:11.747448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.747455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 
15:30:11.747463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.747471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.747479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.747486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.747495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.747502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.747511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.747518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.747527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.747534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.747543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.747549] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.747557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.747564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.747573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.747580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.747588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.747595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.747604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.747610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.747618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.747625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.747634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 
nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.747644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.747653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.747659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.747668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.747676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.747685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.747692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.747700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.747707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.747716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.747723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:07.939 [2024-11-20 15:30:11.747731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.747739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.747746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85f6a0 is same with the state(6) to be set 00:21:07.939 [2024-11-20 15:30:11.748766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.748784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.748795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.748802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.748811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.748820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.748829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.748836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.748846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.748854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.748863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.748871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.748881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.748888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.748897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.748903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.748912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.748920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.748928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.748934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.939 [2024-11-20 15:30:11.748943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.939 [2024-11-20 15:30:11.748955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" pairs repeated for cid:11-63 (lba:17792-24448, len:128), 00:21:07.939-00:21:07.941 ...]
00:21:07.941 [2024-11-20 15:30:11.749784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4f6e0 is same with the state(6) to be set
00:21:07.941 [2024-11-20 15:30:11.750802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.941 [2024-11-20 15:30:11.750818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" pairs repeated for cid:1-63 (lba:16512-24448, len:128), 00:21:07.941-00:21:07.942 ...]
00:21:07.942 [2024-11-20 15:30:11.757596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5ca10 is same with the state(6) to be set 00:21:07.942 [2024-11-20 15:30:11.758811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.942 [2024-11-20 15:30:11.758827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.942 [2024-11-20 15:30:11.758840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.942 [2024-11-20 15:30:11.758848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.942 [2024-11-20 15:30:11.758861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.942 [2024-11-20 15:30:11.758869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.942 [2024-11-20 15:30:11.758878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.942 [2024-11-20 15:30:11.758886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.942 [2024-11-20 15:30:11.758895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.758901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.758911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.758919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.758928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.758935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.758944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.758957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.758966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.758973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.758982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.758989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.758998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759193] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759279] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 
15:30:11.759458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.943 [2024-11-20 15:30:11.759520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.943 [2024-11-20 15:30:11.759527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.759536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.759543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.759551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.759559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.759567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.759574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.759584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.759591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.759599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.759606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.759615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.759622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.759631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 
nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.759638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.759646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.759653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.759664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.759671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.759680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.759687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.759695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.759701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.759711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.759718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:07.944 [2024-11-20 15:30:11.759726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.759733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.759742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.759748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.759757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.759765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.759774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.759781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.759790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.759797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.759805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.759813] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.759822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.759828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.759837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.759845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.759853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa609d0 is same with the state(6) to be set 00:21:07.944 [2024-11-20 15:30:11.760862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.760877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.760889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.760897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.760906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.760913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.760922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.760930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.760939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.760946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.760960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.760967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.760975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.760983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.760992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.760999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.761008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.761015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.761023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.761031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.761039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.761046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.761055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.761062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.761071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.761081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.761091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.761097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.761106] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.761116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.761125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.761132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.761142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.761149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.761158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.944 [2024-11-20 15:30:11.761165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.944 [2024-11-20 15:30:11.761174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761197] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 
15:30:11.761386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 
nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:07.945 [2024-11-20 15:30:11.761657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761741] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.945 [2024-11-20 15:30:11.761798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.945 [2024-11-20 15:30:11.761805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.761813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.761821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.761829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.761836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.761845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.761852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.761861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.761868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.761876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.761885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.761894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.761902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.761910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a9410 is same with the state(6) to be set 00:21:07.946 [2024-11-20 15:30:11.762905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.762921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.762931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.762939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.762952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.762960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.762969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.762977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.762985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.762992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:07.946 [2024-11-20 15:30:11.763209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.946 [2024-11-20 15:30:11.763399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.946 [2024-11-20 15:30:11.763410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763567] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763651] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 
15:30:11.763840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.763945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.763957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e3a60 is same with the state(6) to be set 00:21:07.947 [2024-11-20 15:30:11.764969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.764983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.764995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.765003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.765012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.947 [2024-11-20 15:30:11.765020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.947 [2024-11-20 15:30:11.765029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 
[2024-11-20 15:30:11.765139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765507] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765596] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.948 [2024-11-20 15:30:11.765668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.948 [2024-11-20 15:30:11.765675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.765684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.765691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.765700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.765707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.765716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.765725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.765734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.765742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.765751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.765758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.765768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.765775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 
15:30:11.765784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.765792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.765800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.765807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.765816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.765823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.765832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.765839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.765848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.765854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.765863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.765870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.765878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.765886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.765893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.765901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.765909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.765915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.765924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.765931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.765941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.765952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.765961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.765968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.765978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.765986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.765995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.766002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.766011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.949 [2024-11-20 15:30:11.766018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.949 [2024-11-20 15:30:11.766026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4f30 is same with the state(6) to be set 00:21:07.949 [2024-11-20 15:30:11.767007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:07.949 [2024-11-20 15:30:11.767024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:07.949 [2024-11-20 15:30:11.767035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:07.949 [2024-11-20 15:30:11.767043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:21:07.949 [2024-11-20 15:30:11.767117] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:21:07.949 [2024-11-20 15:30:11.767131] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:21:07.949 [2024-11-20 15:30:11.767142] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:21:07.949 [2024-11-20 15:30:11.767152] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:21:07.949 [2024-11-20 15:30:11.767229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:07.949 [2024-11-20 15:30:11.767240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:07.949 [2024-11-20 15:30:11.767249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:21:07.949 task offset: 20992 on job bdev=Nvme5n1 fails
00:21:07.949
00:21:07.949 Latency(us)
00:21:07.949 [2024-11-20T14:30:11.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:07.949 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:07.949 Job: Nvme1n1 ended in about 0.65 seconds with error
00:21:07.949 Verification LBA range: start 0x0 length 0x400
00:21:07.949 Nvme1n1 : 0.65 195.64 12.23 97.82 0.00 214716.77 20971.52 197861.73
00:21:07.949 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:07.949 Job: Nvme2n1 ended in about 0.66 seconds with error
00:21:07.949 Verification LBA range: start 0x0 length 0x400
00:21:07.949 Nvme2n1 : 0.66 195.03 12.19 97.51 0.00 209974.32 23251.03 217009.64
00:21:07.949 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:07.949 Job: Nvme3n1 ended in about 0.66 seconds with error
00:21:07.949 Verification LBA range: start 0x0 length 0x400
00:21:07.949 Nvme3n1 : 0.66 194.42 12.15 97.21 0.00 205324.76 16412.49 222480.47
00:21:07.949 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:07.949 Job: Nvme4n1 ended in about 0.67 seconds with error
00:21:07.949 Verification LBA range: start 0x0 length 0x400
00:21:07.949 Nvme4n1 : 0.67 192.15 12.01 96.07 0.00 202655.98 13620.09 218833.25
00:21:07.949 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:07.949 Job: Nvme5n1 ended in about 0.64 seconds with error
00:21:07.949 Verification LBA range: start 0x0 length 0x400
00:21:07.949 Nvme5n1 : 0.64 200.96 12.56 100.48 0.00 187594.54 2835.14 223392.28
00:21:07.949 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:07.949 Job: Nvme6n1 ended in about 0.64 seconds with error
00:21:07.949 Verification LBA range: start 0x0 length 0x400
00:21:07.949 Nvme6n1 : 0.64 200.68 12.54 100.34 0.00 182675.96 5242.88 219745.06
00:21:07.949 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:07.949 Job: Nvme7n1 ended in about 0.67 seconds with error
00:21:07.949 Verification LBA range: start 0x0 length 0x400
00:21:07.949 Nvme7n1 : 0.67 198.98 12.44 95.75 0.00 182791.49 16412.49 175066.60
00:21:07.949 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:07.949 Job: Nvme8n1 ended in about 0.67 seconds with error
00:21:07.949 Verification LBA range: start 0x0 length 0x400
00:21:07.949 Nvme8n1 : 0.67 198.37 12.40 95.46 0.00 178276.43 14930.81 198773.54
00:21:07.949 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:07.949 Job: Nvme9n1 ended in about 0.67 seconds with error
00:21:07.949 Verification LBA range: start 0x0 length 0x400
00:21:07.949 Nvme9n1 : 0.67 95.17 5.95 95.17 0.00 267664.70 18805.98 253481.85
00:21:07.949 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:07.949 Job: Nvme10n1 ended in about 0.67 seconds with error
00:21:07.949 Verification LBA range: start 0x0 length 0x400
00:21:07.949 Nvme10n1 : 0.67 94.88 5.93 94.88 0.00 260762.49 27240.18 238892.97
00:21:07.949 [2024-11-20T14:30:11.857Z] ===================================================================================================================
00:21:07.949 [2024-11-20T14:30:11.857Z] Total : 1766.28 110.39 970.69 0.00 205179.79 2835.14 253481.85
00:21:07.950 [2024-11-20 15:30:11.797628] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:07.950 [2024-11-20 15:30:11.797679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:21:07.950 [2024-11-20 15:30:11.797993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.950 [2024-11-20 15:30:11.798016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65a1b0 with addr=10.0.0.2, port=4420
00:21:07.950 [2024-11-20 15:30:11.798026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a1b0 is same with the state(6) to be set
00:21:07.950 [2024-11-20 15:30:11.798201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.950 [2024-11-20 15:30:11.798216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x659d50 with addr=10.0.0.2, port=4420
00:21:07.950 [2024-11-20 15:30:11.798224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659d50 is same with the state(6) to be set
00:21:07.950 [2024-11-20 15:30:11.798415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.950 [2024-11-20 15:30:11.798428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x657c70 with addr=10.0.0.2, port=4420
00:21:07.950 [2024-11-20 15:30:11.798436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657c70 is same with the state(6) to be set
00:21:07.950 [2024-11-20 15:30:11.798546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.950 [2024-11-20 15:30:11.798558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa847a0 with addr=10.0.0.2, port=4420
00:21:07.950 [2024-11-20 15:30:11.798567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa847a0 is same with the state(6) to be set
00:21:07.950 [2024-11-20 15:30:11.800676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.950 [2024-11-20 15:30:11.800699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x56e610 with addr=10.0.0.2, port=4420
00:21:07.950 [2024-11-20 15:30:11.800708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e610 is same with the state(6) to be set
00:21:07.950 [2024-11-20 15:30:11.800903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.950 [2024-11-20 15:30:11.800914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa97590 with addr=10.0.0.2, port=4420
00:21:07.950 [2024-11-20 15:30:11.800922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa97590 is same with the state(6) to be set
00:21:07.950 [2024-11-20 15:30:11.801134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.950 [2024-11-20 15:30:11.801146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaca140 with addr=10.0.0.2, port=4420
00:21:07.950 [2024-11-20 15:30:11.801154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaca140 is same with the state(6) to be set
00:21:07.950 [2024-11-20 15:30:11.801308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.950 [2024-11-20 15:30:11.801320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad0c40 with addr=10.0.0.2, port=4420
00:21:07.950 [2024-11-20 15:30:11.801327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad0c40 is same with the state(6) to be set
00:21:07.950 [2024-11-20 15:30:11.801341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65a1b0 (9): Bad file descriptor
00:21:07.950 [2024-11-20 15:30:11.801353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x659d50 (9): Bad file descriptor
00:21:07.950 [2024-11-20 15:30:11.801362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x657c70 (9): Bad file descriptor
00:21:07.950 [2024-11-20 15:30:11.801370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa847a0 (9): Bad file descriptor
00:21:07.950 [2024-11-20 15:30:11.801410] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:21:07.950 [2024-11-20 15:30:11.801424] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:21:07.950 [2024-11-20 15:30:11.801439] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:21:07.950 [2024-11-20 15:30:11.801450] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:21:07.950 [2024-11-20 15:30:11.801460] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:21:07.950 [2024-11-20 15:30:11.801472] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:21:07.950 [2024-11-20 15:30:11.801540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:07.950 [2024-11-20 15:30:11.801554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:07.950 [2024-11-20 15:30:11.801586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x56e610 (9): Bad file descriptor 00:21:07.950 [2024-11-20 15:30:11.801603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa97590 (9): Bad file descriptor 00:21:07.950 [2024-11-20 15:30:11.801613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaca140 (9): Bad file descriptor 00:21:07.950 [2024-11-20 15:30:11.801621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad0c40 (9): Bad file descriptor 00:21:07.950 [2024-11-20 15:30:11.801630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:07.950 [2024-11-20 15:30:11.801637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:07.950 [2024-11-20 15:30:11.801646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:07.950 [2024-11-20 15:30:11.801657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:21:07.950 [2024-11-20 15:30:11.801666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:07.950 [2024-11-20 15:30:11.801672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:07.950 [2024-11-20 15:30:11.801679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:07.950 [2024-11-20 15:30:11.801686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:07.950 [2024-11-20 15:30:11.801693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:07.950 [2024-11-20 15:30:11.801701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:07.950 [2024-11-20 15:30:11.801708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:07.950 [2024-11-20 15:30:11.801715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:07.950 [2024-11-20 15:30:11.801724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:07.950 [2024-11-20 15:30:11.801731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:07.950 [2024-11-20 15:30:11.801737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:07.950 [2024-11-20 15:30:11.801743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:21:07.950 [2024-11-20 15:30:11.801997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.950 [2024-11-20 15:30:11.802014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa854c0 with addr=10.0.0.2, port=4420 00:21:07.950 [2024-11-20 15:30:11.802022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa854c0 is same with the state(6) to be set 00:21:07.950 [2024-11-20 15:30:11.802244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.950 [2024-11-20 15:30:11.802256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa7b300 with addr=10.0.0.2, port=4420 00:21:07.950 [2024-11-20 15:30:11.802264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b300 is same with the state(6) to be set 00:21:07.950 [2024-11-20 15:30:11.802272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:07.950 [2024-11-20 15:30:11.802279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:07.950 [2024-11-20 15:30:11.802286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:07.950 [2024-11-20 15:30:11.802292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:07.950 [2024-11-20 15:30:11.802303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:07.950 [2024-11-20 15:30:11.802310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:07.950 [2024-11-20 15:30:11.802316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:21:07.950 [2024-11-20 15:30:11.802323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:07.950 [2024-11-20 15:30:11.802330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:07.950 [2024-11-20 15:30:11.802337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:07.950 [2024-11-20 15:30:11.802343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:07.950 [2024-11-20 15:30:11.802351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:21:07.950 [2024-11-20 15:30:11.802359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:07.950 [2024-11-20 15:30:11.802365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:07.950 [2024-11-20 15:30:11.802372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:07.950 [2024-11-20 15:30:11.802378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:21:07.950 [2024-11-20 15:30:11.802405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa854c0 (9): Bad file descriptor 00:21:07.950 [2024-11-20 15:30:11.802416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7b300 (9): Bad file descriptor 00:21:07.950 [2024-11-20 15:30:11.802442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:07.950 [2024-11-20 15:30:11.802449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:07.950 [2024-11-20 15:30:11.802456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:07.950 [2024-11-20 15:30:11.802462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:07.950 [2024-11-20 15:30:11.802470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:07.950 [2024-11-20 15:30:11.802477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:07.950 [2024-11-20 15:30:11.802483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:07.951 [2024-11-20 15:30:11.802490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:21:08.210 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2221795 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2221795 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2221795 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:09.590 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:09.591 rmmod nvme_tcp 00:21:09.591 rmmod nvme_fabrics 00:21:09.591 rmmod nvme_keyring 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:09.591 15:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2221461 ']' 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2221461 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2221461 ']' 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2221461 00:21:09.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2221461) - No such process 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2221461 is not found' 00:21:09.591 Process with pid 2221461 is not found 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.591 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:11.497 00:21:11.497 real 0m7.089s 00:21:11.497 user 0m16.115s 00:21:11.497 sys 0m1.251s 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:11.497 ************************************ 00:21:11.497 END TEST nvmf_shutdown_tc3 00:21:11.497 ************************************ 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:11.497 ************************************ 00:21:11.497 START TEST nvmf_shutdown_tc4 00:21:11.497 ************************************ 00:21:11.497 15:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:11.497 15:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:11.497 15:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:11.497 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:11.498 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:11.498 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.498 15:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:21:11.498 Found net devices under 0000:86:00.0: cvl_0_0 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:11.498 Found net devices under 0000:86:00.1: cvl_0_1 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:11.498 15:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.498 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.758 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.758 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.758 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:11.758 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.758 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.758 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.758 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:11.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:11.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:21:11.759 00:21:11.759 --- 10.0.0.2 ping statistics --- 00:21:11.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.759 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:11.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:11.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:21:11.759 00:21:11.759 --- 10.0.0.1 ping statistics --- 00:21:11.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.759 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:11.759 15:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2223034 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2223034 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2223034 ']' 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.759 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:12.018 [2024-11-20 15:30:15.702184] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:21:12.018 [2024-11-20 15:30:15.702234] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.018 [2024-11-20 15:30:15.779416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:12.018 [2024-11-20 15:30:15.821919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.018 [2024-11-20 15:30:15.821959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.018 [2024-11-20 15:30:15.821966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.018 [2024-11-20 15:30:15.821976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.018 [2024-11-20 15:30:15.821981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:12.018 [2024-11-20 15:30:15.823643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.018 [2024-11-20 15:30:15.823747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.018 [2024-11-20 15:30:15.823854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.018 [2024-11-20 15:30:15.823855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:12.956 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.956 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:12.957 [2024-11-20 15:30:16.595126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.957 15:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.957 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:12.957 Malloc1 00:21:12.957 [2024-11-20 15:30:16.701511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.957 Malloc2 00:21:12.957 Malloc3 00:21:12.957 Malloc4 00:21:12.957 Malloc5 00:21:13.215 Malloc6 00:21:13.215 Malloc7 00:21:13.215 Malloc8 00:21:13.215 Malloc9 
00:21:13.215 Malloc10 00:21:13.215 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.215 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:13.216 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:13.216 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:13.474 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2223317 00:21:13.474 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:13.474 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:13.474 [2024-11-20 15:30:17.205301] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:18.754 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:18.754 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2223034 00:21:18.755 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2223034 ']' 00:21:18.755 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2223034 00:21:18.755 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:18.755 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:18.755 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2223034 00:21:18.755 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:18.755 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:18.755 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2223034' 00:21:18.755 killing process with pid 2223034 00:21:18.755 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2223034 00:21:18.755 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2223034 00:21:18.755 [2024-11-20 15:30:22.194854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x718a40 is same with the state(6) to be set 00:21:18.755 [2024-11-20 
15:30:22.194905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x718a40 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.194913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x718a40 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.194919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x718a40 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.194926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x718a40 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.194933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x718a40 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.194939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x718a40 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.194952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x718a40 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.195163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x718f10 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.195197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x718f10 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.195206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x718f10 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.195214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x718f10 is same with the state(6) to be set 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 
starting I/O failed: -6 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 [2024-11-20 15:30:22.196363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7193e0 is same with the state(6) to be set 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 [2024-11-20 15:30:22.196389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7193e0 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.196398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7193e0 is same with the state(6) to be set 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 [2024-11-20 15:30:22.196406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7193e0 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.196414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7193e0 is same with the state(6) to be set 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 [2024-11-20 15:30:22.196420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7193e0 is same with the state(6) to be set 00:21:18.755 starting I/O failed: -6 00:21:18.755 [2024-11-20 15:30:22.196429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7193e0 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.196435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7193e0 is same with the state(6) to be set 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 starting I/O failed: -6 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8)
00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 starting I/O failed: -6 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 starting I/O failed: -6 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 starting I/O failed: -6 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 starting I/O failed: -6 00:21:18.755 [2024-11-20 15:30:22.196933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x718570 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.196967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x718570 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.196976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x718570 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.197832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x719da0 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.197853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x719da0 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.198419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71a270 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.198440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71a270 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.198447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x71a270 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.198454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71a270 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.198461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71a270 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.198468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71a270 is same with the state(6) to be set 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 starting I/O failed: -6 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 starting I/O failed: -6 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 [2024-11-20 15:30:22.199225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7198d0 is same with the state(6) to be set 00:21:18.755 Write completed with error (sct=0, sc=8) 00:21:18.755 [2024-11-20 15:30:22.199246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7198d0 is same with the state(6) to be set 00:21:18.755 starting I/O failed: -6 00:21:18.755 [2024-11-20 15:30:22.199254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7198d0 is same with the state(6) to be set 00:21:18.755 [2024-11-20 15:30:22.199262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7198d0 is same with the state(6) to be set 00:21:18.756 Write completed with error (sct=0, 
sc=8) 00:21:18.756 [2024-11-20 15:30:22.199269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7198d0 is same with the state(6) to be set 00:21:18.756 [2024-11-20 15:30:22.199276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7198d0 is same with the state(6) to be set 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 [2024-11-20 15:30:22.199282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7198d0 is same with the state(6) to be set 00:21:18.756 [2024-11-20 15:30:22.199289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7198d0 is same with the state(6) to be set 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 
00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 starting I/O failed: -6 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 Write completed with error (sct=0, sc=8) 00:21:18.756 
00:21:18.756 Write completed with error (sct=0, sc=8)
00:21:18.756 starting I/O failed: -6
00:21:18.756 [... the two messages above repeat for every outstanding I/O; several hundred identical repetitions between 00:21:18.756 and 00:21:18.761 condensed, including where they interleave with the error lines below ...]
00:21:18.756 [2024-11-20 15:30:22.201081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:18.756 NVMe io qpair process completion error
00:21:18.757 [2024-11-20 15:30:22.202108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:18.757 [2024-11-20 15:30:22.203018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:18.758 [2024-11-20 15:30:22.204019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:18.758 [2024-11-20 15:30:22.205566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:18.758 NVMe io qpair process completion error
00:21:18.758 [2024-11-20 15:30:22.207511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7adc80 is same with the state(6) to be set
00:21:18.758 [... the tqpair=0x7adc80 message above repeats once more (15:30:22.207536) ...]
00:21:18.758 [2024-11-20 15:30:22.208182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae620 is same with the state(6) to be set
00:21:18.759 [... the tqpair=0x7ae620 message above repeats ten more times (15:30:22.208206 through 15:30:22.208269), interleaved mid-word with the condensed write-error messages ...]
00:21:18.759 [2024-11-20 15:30:22.208583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ad7b0 is same with the state(6) to be set
00:21:18.759 [... the tqpair=0x7ad7b0 message above repeats eight more times (15:30:22.208609 through 15:30:22.208656) ...]
00:21:18.760 [2024-11-20 15:30:22.210222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:18.760 NVMe io qpair process completion error
00:21:18.760 [2024-11-20 15:30:22.211234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:18.761 [2024-11-20 15:30:22.212126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 
Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 [2024-11-20 15:30:22.213165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write 
completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 
Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.761 starting I/O failed: -6 00:21:18.761 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 [2024-11-20 15:30:22.214708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: 
*ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:18.762 NVMe io qpair process completion error 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 
Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 [2024-11-20 15:30:22.215733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 
00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.762 Write completed with error (sct=0, sc=8) 00:21:18.762 starting I/O failed: -6 00:21:18.763 Write 
completed with error (sct=0, sc=8) 00:21:18.763 [2024-11-20 15:30:22.216657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with 
error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 
starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 [2024-11-20 15:30:22.217689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O 
failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting 
I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.763 starting I/O failed: -6 00:21:18.763 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 [2024-11-20 15:30:22.219514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No 
such device or address) on qpair id 1 00:21:18.764 NVMe io qpair process completion error 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O failed: -6 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 Write completed with error (sct=0, sc=8) 00:21:18.764 starting I/O 
failed: -6
00:21:18.764 Write completed with error (sct=0, sc=8)
00:21:18.764 starting I/O failed: -6
00:21:18.764 [2024-11-20 15:30:22.220555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:18.764 [2024-11-20 15:30:22.221487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:18.765 [2024-11-20 15:30:22.222483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:18.765 [2024-11-20 15:30:22.228832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:18.765 NVMe io qpair process completion error
00:21:18.766 [2024-11-20 15:30:22.229874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:18.766 [2024-11-20 15:30:22.230759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:18.767 [2024-11-20 15:30:22.231814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:18.767 [2024-11-20 15:30:22.233460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:18.767 NVMe io qpair process completion error
00:21:18.768 [2024-11-20 15:30:22.234437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:18.768 [2024-11-20 15:30:22.235302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:18.769 [2024-11-20 15:30:22.236374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:18.769 starting I/O failed: -6
00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: 
-6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 [2024-11-20 15:30:22.238264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:18.769 NVMe io qpair process completion error 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: 
-6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 starting I/O failed: -6 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.769 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 
00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 [2024-11-20 15:30:22.239184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 
00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 [2024-11-20 15:30:22.240031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write 
completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 
00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 starting I/O failed: -6 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.770 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 [2024-11-20 15:30:22.241071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with 
error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed 
with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write 
completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 starting I/O failed: -6 00:21:18.771 [2024-11-20 15:30:22.246274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:18.771 NVMe io qpair process completion error 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with 
error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with 
error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.771 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with 
error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with error (sct=0, sc=8) 00:21:18.772 Write completed with 
error (sct=0, sc=8)
00:21:18.772 Write completed with error (sct=0, sc=8)
00:21:18.772 [... identical "Write completed with error (sct=0, sc=8)" lines repeated for the remaining outstanding writes ...]
00:21:18.772 [2024-11-20 15:30:22.255755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:18.772 NVMe io qpair process completion error
00:21:18.772 Write completed with error (sct=0, sc=8)
00:21:18.773 [... identical "Write completed with error (sct=0, sc=8)" lines repeated for the remaining outstanding writes ...]
00:21:18.773 Initializing NVMe Controllers
00:21:18.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:18.773 Controller IO queue size 128, less than required.
00:21:18.773 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:18.773 Controller IO queue size 128, less than required.
00:21:18.773 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:18.773 Controller IO queue size 128, less than required.
00:21:18.773 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:18.773 Controller IO queue size 128, less than required.
00:21:18.773 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:18.773 Controller IO queue size 128, less than required.
00:21:18.773 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:18.773 Controller IO queue size 128, less than required.
00:21:18.773 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:18.773 Controller IO queue size 128, less than required.
00:21:18.773 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:18.773 Controller IO queue size 128, less than required.
00:21:18.773 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:18.773 Controller IO queue size 128, less than required.
00:21:18.773 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:18.773 Controller IO queue size 128, less than required.
00:21:18.773 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:18.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:18.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:18.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:18.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:18.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:18.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:18.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:18.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:18.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:18.773 Initialization complete. Launching workers.
00:21:18.773 ========================================================
00:21:18.773 Latency(us)
00:21:18.773 Device Information : IOPS MiB/s Average min max
00:21:18.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2169.56 93.22 59003.09 736.70 112958.13
00:21:18.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2151.50 92.45 59510.84 913.12 142537.72
00:21:18.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2162.90 92.94 59209.80 705.11 117503.93
00:21:18.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2151.50 92.45 59522.10 519.52 122971.79
00:21:18.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2116.66 90.95 60205.29 441.12 108094.70
00:21:18.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2155.15 92.60 58805.14 960.49 106523.97
00:21:18.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2152.79 92.50 59198.32 457.15 106813.41
00:21:18.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2142.68 92.07 59169.45 894.04 106544.12
00:21:18.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2089.78 89.80 60678.14 924.75 107695.62
00:21:18.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2135.59 91.76 59390.35 938.46 107399.79
00:21:18.774 ========================================================
00:21:18.774 Total : 21428.10 920.74 59464.10 441.12 142537.72
00:21:18.774
00:21:18.774 [2024-11-20 15:30:22.261604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13db890 is same with the state(6) to be set
00:21:18.774 [2024-11-20 15:30:22.261656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbef0 is same with the state(6) to be set
00:21:18.774 [2024-11-20 15:30:22.261687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc410 is same with the state(6) to be set
00:21:18.774 [2024-11-20 15:30:22.261720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dd720 is same with the state(6) to be set
00:21:18.774 [2024-11-20 15:30:22.261752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dd900 is same with the state(6) to be set
00:21:18.774 [2024-11-20 15:30:22.261782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ddae0 is same with the state(6) to be set
00:21:18.774 [2024-11-20 15:30:22.261810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc740 is same with the state(6) to be set
00:21:18.774 [2024-11-20 15:30:22.261840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbbc0 is same with the state(6) to be set
00:21:18.774 [2024-11-20 15:30:22.261869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13db560 is same with the state(6) to be set
00:21:18.774 [2024-11-20 15:30:22.261899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dca70 is same with the state(6) to be set
00:21:18.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:21:18.774 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:21:19.809 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2223317
00:21:19.809 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:21:19.809 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2223317
00:21:19.809 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
common/autotest_common.sh@640 -- # local arg=wait
00:21:19.809 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:19.809 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:21:19.809 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:19.809 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2223317
00:21:19.809 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2223034 ']'
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2223034
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2223034 ']'
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2223034
00:21:19.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2223034) - No such process
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2223034 is not found'
Process with pid 2223034 is not found
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:19.810 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:22.346 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:22.346
00:21:22.346 real 0m10.408s
00:21:22.346 user 0m27.519s
00:21:22.346 sys 0m5.302s
00:21:22.346 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:22.346 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:22.346 ************************************
00:21:22.346 END TEST nvmf_shutdown_tc4
00:21:22.346 ************************************
00:21:22.346 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:21:22.346
00:21:22.346 real 0m41.583s
00:21:22.346 user 1m43.540s
00:21:22.346 sys 0m14.066s
00:21:22.346 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:22.346 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:21:22.347 ************************************
00:21:22.347 END TEST nvmf_shutdown
00:21:22.347 ************************************
00:21:22.347 15:30:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:21:22.347 15:30:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:22.347 15:30:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:22.347 15:30:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:22.347 ************************************
00:21:22.347 START TEST nvmf_nsid
00:21:22.347 ************************************
00:21:22.347 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:21:22.347 * Looking for test storage...
00:21:22.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:22.347 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:22.347 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:21:22.347 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:22.347 
15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:22.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.347 --rc genhtml_branch_coverage=1 00:21:22.347 --rc genhtml_function_coverage=1 00:21:22.347 --rc genhtml_legend=1 00:21:22.347 --rc geninfo_all_blocks=1 00:21:22.347 --rc 
geninfo_unexecuted_blocks=1 00:21:22.347 00:21:22.347 ' 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:22.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.347 --rc genhtml_branch_coverage=1 00:21:22.347 --rc genhtml_function_coverage=1 00:21:22.347 --rc genhtml_legend=1 00:21:22.347 --rc geninfo_all_blocks=1 00:21:22.347 --rc geninfo_unexecuted_blocks=1 00:21:22.347 00:21:22.347 ' 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:22.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.347 --rc genhtml_branch_coverage=1 00:21:22.347 --rc genhtml_function_coverage=1 00:21:22.347 --rc genhtml_legend=1 00:21:22.347 --rc geninfo_all_blocks=1 00:21:22.347 --rc geninfo_unexecuted_blocks=1 00:21:22.347 00:21:22.347 ' 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:22.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.347 --rc genhtml_branch_coverage=1 00:21:22.347 --rc genhtml_function_coverage=1 00:21:22.347 --rc genhtml_legend=1 00:21:22.347 --rc geninfo_all_blocks=1 00:21:22.347 --rc geninfo_unexecuted_blocks=1 00:21:22.347 00:21:22.347 ' 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.347 15:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.347 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:22.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:22.348 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:28.915 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:28.916 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:28.916 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:28.916 Found net devices under 0000:86:00.0: cvl_0_0 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:28.916 Found net devices under 0000:86:00.1: cvl_0_1 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:28.916 15:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:28.916 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:21:28.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:21:28.916 00:21:28.916 --- 10.0.0.2 ping statistics --- 00:21:28.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.916 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:28.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:28.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:21:28.916 00:21:28.916 --- 10.0.0.1 ping statistics --- 00:21:28.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.916 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:28.916 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:28.916 15:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:28.917 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2227788 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2227788 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2227788 ']' 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.917 [2024-11-20 15:30:32.052612] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:21:28.917 [2024-11-20 15:30:32.052660] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.917 [2024-11-20 15:30:32.132976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.917 [2024-11-20 15:30:32.173922] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.917 [2024-11-20 15:30:32.173963] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.917 [2024-11-20 15:30:32.173971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.917 [2024-11-20 15:30:32.173977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.917 [2024-11-20 15:30:32.173982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:28.917 [2024-11-20 15:30:32.174543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2227920 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.917 
15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=3ace2a1d-b2cc-42da-9c59-686acf5cf8d6 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=fcb11242-8f76-42f5-afd7-9c17cf02d2bd 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=292e17e9-7b6d-4070-a349-0d7c14d6236d 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.917 null0 00:21:28.917 null1 00:21:28.917 null2 00:21:28.917 [2024-11-20 15:30:32.357781] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:21:28.917 [2024-11-20 15:30:32.357824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227920 ] 00:21:28.917 [2024-11-20 15:30:32.361365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.917 [2024-11-20 15:30:32.385551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2227920 /var/tmp/tgt2.sock 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2227920 ']' 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:28.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.917 [2024-11-20 15:30:32.431671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.917 [2024-11-20 15:30:32.478337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:28.917 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:29.175 [2024-11-20 15:30:32.998829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.175 [2024-11-20 15:30:33.014958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:29.175 nvme0n1 nvme0n2 00:21:29.175 nvme1n1 00:21:29.175 15:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:29.175 15:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:29.175 15:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:30.550 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:30.550 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:30.550 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:21:30.550 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:30.550 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:30.550 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:30.550 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:30.550 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:30.550 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:30.550 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:30.550 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:30.550 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:30.550 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 3ace2a1d-b2cc-42da-9c59-686acf5cf8d6 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:31.486 15:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3ace2a1db2cc42da9c59686acf5cf8d6 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3ACE2A1DB2CC42DA9C59686ACF5CF8D6 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 3ACE2A1DB2CC42DA9C59686ACF5CF8D6 == \3\A\C\E\2\A\1\D\B\2\C\C\4\2\D\A\9\C\5\9\6\8\6\A\C\F\5\C\F\8\D\6 ]] 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid fcb11242-8f76-42f5-afd7-9c17cf02d2bd 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:31.486 
15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fcb112428f7642f5afd79c17cf02d2bd 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FCB112428F7642F5AFD79C17CF02D2BD 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ FCB112428F7642F5AFD79C17CF02D2BD == \F\C\B\1\1\2\4\2\8\F\7\6\4\2\F\5\A\F\D\7\9\C\1\7\C\F\0\2\D\2\B\D ]] 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 292e17e9-7b6d-4070-a349-0d7c14d6236d 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:31.486 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:21:31.487 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:31.487 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=292e17e97b6d4070a3490d7c14d6236d 00:21:31.487 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 292E17E97B6D4070A3490D7C14D6236D 00:21:31.487 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 292E17E97B6D4070A3490D7C14D6236D == \2\9\2\E\1\7\E\9\7\B\6\D\4\0\7\0\A\3\4\9\0\D\7\C\1\4\D\6\2\3\6\D ]] 00:21:31.487 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:31.745 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:31.745 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:31.745 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2227920 00:21:31.745 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2227920 ']' 00:21:31.745 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2227920 00:21:31.745 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:31.745 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.745 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2227920 00:21:31.745 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:31.745 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:31.745 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2227920' 00:21:31.745 killing process with pid 2227920 00:21:31.745 15:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2227920 00:21:31.745 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2227920 00:21:32.004 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:32.004 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:32.004 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:32.004 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:32.004 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:32.004 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:32.004 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:32.004 rmmod nvme_tcp 00:21:32.263 rmmod nvme_fabrics 00:21:32.263 rmmod nvme_keyring 00:21:32.263 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:32.263 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:32.263 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:32.263 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2227788 ']' 00:21:32.263 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2227788 00:21:32.263 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2227788 ']' 00:21:32.263 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2227788 00:21:32.263 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:32.263 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:32.263 15:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2227788 00:21:32.263 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:32.263 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:32.263 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2227788' 00:21:32.263 killing process with pid 2227788 00:21:32.263 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2227788 00:21:32.263 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2227788 00:21:32.263 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:32.263 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:32.263 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:32.263 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:32.263 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:32.263 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:32.263 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:32.263 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:32.263 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:32.522 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.522 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.522 15:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.426 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:34.426 00:21:34.426 real 0m12.367s 00:21:34.426 user 0m9.719s 00:21:34.426 sys 0m5.425s 00:21:34.426 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.426 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:34.426 ************************************ 00:21:34.426 END TEST nvmf_nsid 00:21:34.426 ************************************ 00:21:34.426 15:30:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:34.426 00:21:34.426 real 11m57.522s 00:21:34.426 user 25m35.734s 00:21:34.426 sys 3m40.107s 00:21:34.426 15:30:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.426 15:30:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:34.426 ************************************ 00:21:34.426 END TEST nvmf_target_extra 00:21:34.426 ************************************ 00:21:34.426 15:30:38 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:34.426 15:30:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:34.426 15:30:38 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.426 15:30:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:34.685 ************************************ 00:21:34.685 START TEST nvmf_host 00:21:34.685 ************************************ 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:34.685 * Looking for test storage... 
00:21:34.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:34.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.685 --rc genhtml_branch_coverage=1 00:21:34.685 --rc genhtml_function_coverage=1 00:21:34.685 --rc genhtml_legend=1 00:21:34.685 --rc geninfo_all_blocks=1 00:21:34.685 --rc geninfo_unexecuted_blocks=1 00:21:34.685 00:21:34.685 ' 00:21:34.685 15:30:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:34.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.686 --rc genhtml_branch_coverage=1 00:21:34.686 --rc genhtml_function_coverage=1 00:21:34.686 --rc genhtml_legend=1 00:21:34.686 --rc 
geninfo_all_blocks=1 00:21:34.686 --rc geninfo_unexecuted_blocks=1 00:21:34.686 00:21:34.686 ' 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:34.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.686 --rc genhtml_branch_coverage=1 00:21:34.686 --rc genhtml_function_coverage=1 00:21:34.686 --rc genhtml_legend=1 00:21:34.686 --rc geninfo_all_blocks=1 00:21:34.686 --rc geninfo_unexecuted_blocks=1 00:21:34.686 00:21:34.686 ' 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:34.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.686 --rc genhtml_branch_coverage=1 00:21:34.686 --rc genhtml_function_coverage=1 00:21:34.686 --rc genhtml_legend=1 00:21:34.686 --rc geninfo_all_blocks=1 00:21:34.686 --rc geninfo_unexecuted_blocks=1 00:21:34.686 00:21:34.686 ' 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:34.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.686 ************************************ 00:21:34.686 START TEST nvmf_multicontroller 00:21:34.686 ************************************ 00:21:34.686 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:34.946 * Looking for test storage... 
00:21:34.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:34.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.946 --rc genhtml_branch_coverage=1 00:21:34.946 --rc genhtml_function_coverage=1 
00:21:34.946 --rc genhtml_legend=1 00:21:34.946 --rc geninfo_all_blocks=1 00:21:34.946 --rc geninfo_unexecuted_blocks=1 00:21:34.946 00:21:34.946 ' 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:34.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.946 --rc genhtml_branch_coverage=1 00:21:34.946 --rc genhtml_function_coverage=1 00:21:34.946 --rc genhtml_legend=1 00:21:34.946 --rc geninfo_all_blocks=1 00:21:34.946 --rc geninfo_unexecuted_blocks=1 00:21:34.946 00:21:34.946 ' 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:34.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.946 --rc genhtml_branch_coverage=1 00:21:34.946 --rc genhtml_function_coverage=1 00:21:34.946 --rc genhtml_legend=1 00:21:34.946 --rc geninfo_all_blocks=1 00:21:34.946 --rc geninfo_unexecuted_blocks=1 00:21:34.946 00:21:34.946 ' 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:34.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.946 --rc genhtml_branch_coverage=1 00:21:34.946 --rc genhtml_function_coverage=1 00:21:34.946 --rc genhtml_legend=1 00:21:34.946 --rc geninfo_all_blocks=1 00:21:34.946 --rc geninfo_unexecuted_blocks=1 00:21:34.946 00:21:34.946 ' 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.946 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.947 15:30:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:34.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:34.947 15:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.516 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.516 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:41.516 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:41.516 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:41.516 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:41.516 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:41.516 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:41.516 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:41.516 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:41.516 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:41.516 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:21:41.516 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:41.516 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:41.516 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:41.516 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:41.517 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:41.517 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.517 15:30:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:41.517 Found net devices under 0000:86:00.0: cvl_0_0 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:41.517 Found net devices under 0000:86:00.1: cvl_0_1 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:41.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:41.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:21:41.517 00:21:41.517 --- 10.0.0.2 ping statistics --- 00:21:41.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.517 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:41.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:21:41.517 00:21:41.517 --- 10.0.0.1 ping statistics --- 00:21:41.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.517 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:41.517 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2232124 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2232124 00:21:41.518 15:30:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2232124 ']' 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.518 [2024-11-20 15:30:44.779944] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:21:41.518 [2024-11-20 15:30:44.780005] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.518 [2024-11-20 15:30:44.860180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:41.518 [2024-11-20 15:30:44.902776] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.518 [2024-11-20 15:30:44.902814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:41.518 [2024-11-20 15:30:44.902821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.518 [2024-11-20 15:30:44.902827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.518 [2024-11-20 15:30:44.902832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.518 [2024-11-20 15:30:44.904112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.518 [2024-11-20 15:30:44.904223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.518 [2024-11-20 15:30:44.904224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:41.518 15:30:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.518 [2024-11-20 15:30:45.040599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.518 Malloc0 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.518 [2024-11-20 
15:30:45.113258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.518 [2024-11-20 15:30:45.121201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.518 Malloc1 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2232240 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:21:41.518 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2232240 /var/tmp/bdevperf.sock 00:21:41.519 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2232240 ']' 00:21:41.519 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.519 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.519 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:41.519 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.519 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.779 NVMe0n1 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.779 1 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:41.779 15:30:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.779 request: 00:21:41.779 { 00:21:41.779 "name": "NVMe0", 00:21:41.779 "trtype": "tcp", 00:21:41.779 "traddr": "10.0.0.2", 00:21:41.779 "adrfam": "ipv4", 00:21:41.779 "trsvcid": "4420", 00:21:41.779 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.779 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:41.779 "hostaddr": "10.0.0.1", 00:21:41.779 "prchk_reftag": false, 00:21:41.779 "prchk_guard": false, 00:21:41.779 "hdgst": false, 00:21:41.779 "ddgst": false, 00:21:41.779 "allow_unrecognized_csi": false, 00:21:41.779 "method": "bdev_nvme_attach_controller", 00:21:41.779 "req_id": 1 00:21:41.779 } 00:21:41.779 Got JSON-RPC error response 00:21:41.779 response: 00:21:41.779 { 00:21:41.779 "code": -114, 00:21:41.779 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:41.779 } 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:41.779 15:30:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.779 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.039 request: 00:21:42.039 { 00:21:42.039 "name": "NVMe0", 00:21:42.039 "trtype": "tcp", 00:21:42.039 "traddr": "10.0.0.2", 00:21:42.039 "adrfam": "ipv4", 00:21:42.039 "trsvcid": "4420", 00:21:42.039 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:42.039 "hostaddr": "10.0.0.1", 00:21:42.039 "prchk_reftag": false, 00:21:42.039 "prchk_guard": false, 00:21:42.039 "hdgst": false, 00:21:42.039 "ddgst": false, 00:21:42.039 "allow_unrecognized_csi": false, 00:21:42.039 "method": "bdev_nvme_attach_controller", 00:21:42.039 "req_id": 1 00:21:42.039 } 00:21:42.039 Got JSON-RPC error response 00:21:42.039 response: 00:21:42.039 { 00:21:42.039 "code": -114, 00:21:42.039 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:42.039 } 00:21:42.039 15:30:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.039 request: 00:21:42.039 { 00:21:42.039 "name": "NVMe0", 00:21:42.039 "trtype": "tcp", 00:21:42.039 "traddr": "10.0.0.2", 00:21:42.039 "adrfam": "ipv4", 00:21:42.039 "trsvcid": "4420", 00:21:42.039 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.039 "hostaddr": "10.0.0.1", 00:21:42.039 "prchk_reftag": false, 00:21:42.039 "prchk_guard": false, 00:21:42.039 "hdgst": false, 00:21:42.039 "ddgst": false, 00:21:42.039 "multipath": "disable", 00:21:42.039 "allow_unrecognized_csi": false, 00:21:42.039 "method": "bdev_nvme_attach_controller", 00:21:42.039 "req_id": 1 00:21:42.039 } 00:21:42.039 Got JSON-RPC error response 00:21:42.039 response: 00:21:42.039 { 00:21:42.039 "code": -114, 00:21:42.039 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:42.039 } 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.039 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.039 request: 00:21:42.039 { 00:21:42.039 "name": "NVMe0", 00:21:42.039 "trtype": "tcp", 00:21:42.039 "traddr": "10.0.0.2", 00:21:42.039 "adrfam": "ipv4", 00:21:42.039 "trsvcid": "4420", 00:21:42.039 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.039 "hostaddr": "10.0.0.1", 00:21:42.039 "prchk_reftag": false, 00:21:42.039 "prchk_guard": false, 00:21:42.039 "hdgst": false, 00:21:42.039 "ddgst": false, 00:21:42.039 "multipath": "failover", 00:21:42.039 "allow_unrecognized_csi": false, 00:21:42.039 "method": "bdev_nvme_attach_controller", 00:21:42.039 "req_id": 1 00:21:42.039 } 00:21:42.039 Got JSON-RPC error response 00:21:42.040 response: 00:21:42.040 { 00:21:42.040 "code": -114, 00:21:42.040 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:42.040 } 00:21:42.040 15:30:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:42.040 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:42.040 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:42.040 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:42.040 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:42.040 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:42.040 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.040 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.040 NVMe0n1 00:21:42.040 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.040 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:42.040 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.040 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.040 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.040 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:42.040 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.040 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.040 00:21:42.040 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.040 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:42.040 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:42.040 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.040 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.299 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.299 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:42.299 15:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:43.235 { 00:21:43.235 "results": [ 00:21:43.235 { 00:21:43.235 "job": "NVMe0n1", 00:21:43.235 "core_mask": "0x1", 00:21:43.235 "workload": "write", 00:21:43.235 "status": "finished", 00:21:43.235 "queue_depth": 128, 00:21:43.235 "io_size": 4096, 00:21:43.235 "runtime": 1.00806, 00:21:43.235 "iops": 24136.460131341388, 00:21:43.235 "mibps": 94.2830473880523, 00:21:43.235 "io_failed": 0, 00:21:43.235 "io_timeout": 0, 00:21:43.235 "avg_latency_us": 5296.878020489159, 00:21:43.235 "min_latency_us": 3162.824347826087, 00:21:43.235 "max_latency_us": 12879.248695652173 00:21:43.235 } 00:21:43.235 ], 00:21:43.235 "core_count": 1 00:21:43.235 } 00:21:43.235 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:21:43.235 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.235 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.235 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.235 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:43.235 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2232240 00:21:43.235 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2232240 ']' 00:21:43.235 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2232240 00:21:43.235 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:43.235 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.235 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2232240 00:21:43.494 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2232240' 00:21:43.495 killing process with pid 2232240 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2232240 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2232240 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:43.495 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:43.495 [2024-11-20 15:30:45.227901] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:21:43.495 [2024-11-20 15:30:45.227960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2232240 ] 00:21:43.495 [2024-11-20 15:30:45.305943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.495 [2024-11-20 15:30:45.348981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.495 [2024-11-20 15:30:45.926939] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name 9782c0ba-d4bf-4c5f-bee7-87526870537e already exists 00:21:43.495 [2024-11-20 15:30:45.926972] bdev.c:7832:bdev_register: *ERROR*: Unable to add uuid:9782c0ba-d4bf-4c5f-bee7-87526870537e alias for bdev NVMe1n1 00:21:43.495 [2024-11-20 15:30:45.926980] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:43.495 Running I/O for 1 seconds... 00:21:43.495 24076.00 IOPS, 94.05 MiB/s 00:21:43.495 Latency(us) 00:21:43.495 [2024-11-20T14:30:47.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.495 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:43.495 NVMe0n1 : 1.01 24136.46 94.28 0.00 0.00 5296.88 3162.82 12879.25 00:21:43.495 [2024-11-20T14:30:47.403Z] =================================================================================================================== 00:21:43.495 [2024-11-20T14:30:47.403Z] Total : 24136.46 94.28 0.00 0.00 5296.88 3162.82 12879.25 00:21:43.495 Received shutdown signal, test time was about 1.000000 seconds 00:21:43.495 00:21:43.495 Latency(us) 00:21:43.495 [2024-11-20T14:30:47.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.495 [2024-11-20T14:30:47.403Z] =================================================================================================================== 00:21:43.495 [2024-11-20T14:30:47.403Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:21:43.495 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:43.495 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:43.495 rmmod nvme_tcp 00:21:43.495 rmmod nvme_fabrics 00:21:43.495 rmmod nvme_keyring 00:21:43.754 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:43.754 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:43.754 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:43.754 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2232124 ']' 00:21:43.754 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2232124 00:21:43.754 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2232124 ']' 00:21:43.754 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2232124 
00:21:43.754 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:43.754 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.754 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2232124 00:21:43.754 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:43.754 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:43.754 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2232124' 00:21:43.754 killing process with pid 2232124 00:21:43.754 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2232124 00:21:43.754 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2232124 00:21:44.014 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:44.014 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:44.014 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:44.014 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:44.014 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:44.014 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:44.014 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:44.014 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:44.014 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:44.014 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.014 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.014 15:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.916 15:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:45.916 00:21:45.916 real 0m11.169s 00:21:45.916 user 0m12.181s 00:21:45.916 sys 0m5.249s 00:21:45.916 15:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:45.916 15:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.916 ************************************ 00:21:45.916 END TEST nvmf_multicontroller 00:21:45.916 ************************************ 00:21:45.916 15:30:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:45.916 15:30:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:45.916 15:30:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:45.916 15:30:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.916 ************************************ 00:21:45.916 START TEST nvmf_aer 00:21:45.916 ************************************ 00:21:45.917 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:46.176 * Looking for test storage... 
00:21:46.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:46.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.176 --rc genhtml_branch_coverage=1 00:21:46.176 --rc genhtml_function_coverage=1 00:21:46.176 --rc genhtml_legend=1 00:21:46.176 --rc geninfo_all_blocks=1 00:21:46.176 --rc geninfo_unexecuted_blocks=1 00:21:46.176 00:21:46.176 ' 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:46.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.176 --rc 
genhtml_branch_coverage=1 00:21:46.176 --rc genhtml_function_coverage=1 00:21:46.176 --rc genhtml_legend=1 00:21:46.176 --rc geninfo_all_blocks=1 00:21:46.176 --rc geninfo_unexecuted_blocks=1 00:21:46.176 00:21:46.176 ' 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:46.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.176 --rc genhtml_branch_coverage=1 00:21:46.176 --rc genhtml_function_coverage=1 00:21:46.176 --rc genhtml_legend=1 00:21:46.176 --rc geninfo_all_blocks=1 00:21:46.176 --rc geninfo_unexecuted_blocks=1 00:21:46.176 00:21:46.176 ' 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:46.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.176 --rc genhtml_branch_coverage=1 00:21:46.176 --rc genhtml_function_coverage=1 00:21:46.176 --rc genhtml_legend=1 00:21:46.176 --rc geninfo_all_blocks=1 00:21:46.176 --rc geninfo_unexecuted_blocks=1 00:21:46.176 00:21:46.176 ' 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.176 15:30:49 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.176 15:30:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:46.176 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.176 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.176 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.176 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.176 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:46.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:46.177 15:30:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:52.756 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:52.756 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.756 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.757 15:30:55 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:52.757 Found net devices under 0000:86:00.0: cvl_0_0 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:52.757 Found net devices under 0000:86:00.1: cvl_0_1 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:52.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:52.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:21:52.757 00:21:52.757 --- 10.0.0.2 ping statistics --- 00:21:52.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.757 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:52.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:21:52.757 00:21:52.757 --- 10.0.0.1 ping statistics --- 00:21:52.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.757 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2236131 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2236131 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2236131 ']' 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.757 15:30:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.757 [2024-11-20 15:30:55.998292] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:21:52.757 [2024-11-20 15:30:55.998339] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.757 [2024-11-20 15:30:56.077104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:52.757 [2024-11-20 15:30:56.119855] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:52.757 [2024-11-20 15:30:56.119892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.757 [2024-11-20 15:30:56.119899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.757 [2024-11-20 15:30:56.119906] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.757 [2024-11-20 15:30:56.119911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:52.757 [2024-11-20 15:30:56.121408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.757 [2024-11-20 15:30:56.121520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.757 [2024-11-20 15:30:56.121627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.757 [2024-11-20 15:30:56.121628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.758 [2024-11-20 15:30:56.257731] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.758 Malloc0 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.758 [2024-11-20 15:30:56.320424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.758 [ 00:21:52.758 { 00:21:52.758 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:52.758 "subtype": "Discovery", 00:21:52.758 "listen_addresses": [], 00:21:52.758 "allow_any_host": true, 00:21:52.758 "hosts": [] 00:21:52.758 }, 00:21:52.758 { 00:21:52.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.758 "subtype": "NVMe", 00:21:52.758 "listen_addresses": [ 00:21:52.758 { 00:21:52.758 "trtype": "TCP", 00:21:52.758 "adrfam": "IPv4", 00:21:52.758 "traddr": "10.0.0.2", 00:21:52.758 "trsvcid": "4420" 00:21:52.758 } 00:21:52.758 ], 00:21:52.758 "allow_any_host": true, 00:21:52.758 "hosts": [], 00:21:52.758 "serial_number": "SPDK00000000000001", 00:21:52.758 "model_number": "SPDK bdev Controller", 00:21:52.758 "max_namespaces": 2, 00:21:52.758 "min_cntlid": 1, 00:21:52.758 "max_cntlid": 65519, 00:21:52.758 "namespaces": [ 00:21:52.758 { 00:21:52.758 "nsid": 1, 00:21:52.758 "bdev_name": "Malloc0", 00:21:52.758 "name": "Malloc0", 00:21:52.758 "nguid": "64B88B809F3B4223979BEF10D7848DB4", 00:21:52.758 "uuid": "64b88b80-9f3b-4223-979b-ef10d7848db4" 00:21:52.758 } 00:21:52.758 ] 00:21:52.758 } 00:21:52.758 ] 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2236164 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.758 Malloc1 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.758 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.758 Asynchronous Event Request test 00:21:52.758 Attaching to 10.0.0.2 00:21:52.758 Attached to 10.0.0.2 00:21:52.758 Registering asynchronous event callbacks... 00:21:52.758 Starting namespace attribute notice tests for all controllers... 00:21:52.758 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:52.758 aer_cb - Changed Namespace 00:21:52.758 Cleaning up... 
00:21:52.758 [ 00:21:52.758 { 00:21:52.758 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:52.758 "subtype": "Discovery", 00:21:52.758 "listen_addresses": [], 00:21:52.759 "allow_any_host": true, 00:21:52.759 "hosts": [] 00:21:52.759 }, 00:21:52.759 { 00:21:52.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.759 "subtype": "NVMe", 00:21:52.759 "listen_addresses": [ 00:21:52.759 { 00:21:52.759 "trtype": "TCP", 00:21:52.759 "adrfam": "IPv4", 00:21:52.759 "traddr": "10.0.0.2", 00:21:52.759 "trsvcid": "4420" 00:21:52.759 } 00:21:52.759 ], 00:21:52.759 "allow_any_host": true, 00:21:52.759 "hosts": [], 00:21:52.759 "serial_number": "SPDK00000000000001", 00:21:52.759 "model_number": "SPDK bdev Controller", 00:21:52.759 "max_namespaces": 2, 00:21:52.759 "min_cntlid": 1, 00:21:52.759 "max_cntlid": 65519, 00:21:52.759 "namespaces": [ 00:21:52.759 { 00:21:52.759 "nsid": 1, 00:21:52.759 "bdev_name": "Malloc0", 00:21:52.759 "name": "Malloc0", 00:21:52.759 "nguid": "64B88B809F3B4223979BEF10D7848DB4", 00:21:52.759 "uuid": "64b88b80-9f3b-4223-979b-ef10d7848db4" 00:21:52.759 }, 00:21:52.759 { 00:21:52.759 "nsid": 2, 00:21:52.759 "bdev_name": "Malloc1", 00:21:52.759 "name": "Malloc1", 00:21:52.759 "nguid": "EA5AE01200FB4CE999E9FA2240BDBDBA", 00:21:52.759 "uuid": "ea5ae012-00fb-4ce9-99e9-fa2240bdbdba" 00:21:52.759 } 00:21:52.759 ] 00:21:52.759 } 00:21:52.759 ] 00:21:52.759 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.759 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2236164 00:21:52.759 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:52.759 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.759 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.759 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.759 15:30:56 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:52.759 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.759 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:53.017 rmmod nvme_tcp 00:21:53.017 rmmod nvme_fabrics 00:21:53.017 rmmod nvme_keyring 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
2236131 ']' 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2236131 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2236131 ']' 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2236131 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2236131 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2236131' 00:21:53.017 killing process with pid 2236131 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2236131 00:21:53.017 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2236131 00:21:53.276 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:53.276 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:53.276 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:53.276 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:53.276 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:53.276 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:53.276 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:53.276 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:53.276 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:53.276 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.276 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.276 15:30:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.180 15:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:55.180 00:21:55.180 real 0m9.229s 00:21:55.180 user 0m5.113s 00:21:55.180 sys 0m4.866s 00:21:55.180 15:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:55.180 15:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:55.180 ************************************ 00:21:55.180 END TEST nvmf_aer 00:21:55.180 ************************************ 00:21:55.180 15:30:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:55.180 15:30:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:55.180 15:30:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:55.180 15:30:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.440 ************************************ 00:21:55.440 START TEST nvmf_async_init 00:21:55.440 ************************************ 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:55.440 * Looking for test storage... 
00:21:55.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:55.440 15:30:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:55.440 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:55.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.441 --rc genhtml_branch_coverage=1 00:21:55.441 --rc genhtml_function_coverage=1 00:21:55.441 --rc genhtml_legend=1 00:21:55.441 --rc geninfo_all_blocks=1 00:21:55.441 --rc geninfo_unexecuted_blocks=1 00:21:55.441 
00:21:55.441 ' 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:55.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.441 --rc genhtml_branch_coverage=1 00:21:55.441 --rc genhtml_function_coverage=1 00:21:55.441 --rc genhtml_legend=1 00:21:55.441 --rc geninfo_all_blocks=1 00:21:55.441 --rc geninfo_unexecuted_blocks=1 00:21:55.441 00:21:55.441 ' 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:55.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.441 --rc genhtml_branch_coverage=1 00:21:55.441 --rc genhtml_function_coverage=1 00:21:55.441 --rc genhtml_legend=1 00:21:55.441 --rc geninfo_all_blocks=1 00:21:55.441 --rc geninfo_unexecuted_blocks=1 00:21:55.441 00:21:55.441 ' 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:55.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.441 --rc genhtml_branch_coverage=1 00:21:55.441 --rc genhtml_function_coverage=1 00:21:55.441 --rc genhtml_legend=1 00:21:55.441 --rc geninfo_all_blocks=1 00:21:55.441 --rc geninfo_unexecuted_blocks=1 00:21:55.441 00:21:55.441 ' 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.441 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.442 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.442 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.442 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.442 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.442 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.442 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:55.442 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:55.442 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.442 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.442 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:55.442 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.442 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:55.442 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:55.442 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.442 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.442 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:21:55.442 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.442 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:55.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=834095be78a84a9a86bfeade2fde28b8 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
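The trace above records a real (non-fatal) bash error, `nvmf/common.sh: line 33: [: : integer expression expected`, produced by `'[' '' -eq 1 ']'`: an empty string is not a valid operand for the numeric `-eq` test. A hedged sketch of the failure mode and the usual guard (the variable name `FLAG` is a hypothetical stand-in for the unset variable in `common.sh`):

```shell
# An empty string fed to -eq makes test/[ error out with
# "integer expression expected" and a nonzero status.
FLAG=""
if [ "$FLAG" -eq 1 ] 2>/dev/null; then
  echo "flag set"
else
  echo "flag unset or not 1"   # taken: the numeric test fails with an error
fi

# Defaulting the variable before the numeric test avoids the error:
if [ "${FLAG:-0}" -eq 1 ]; then echo yes; else echo no; fi
```

In the log the script simply falls through to the else branch, which is why the run continues despite the message.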
xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:55.443 15:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:02.011 15:31:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:02.011 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.011 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:02.012 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:02.012 Found net devices under 0000:86:00.0: cvl_0_0 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:02.012 Found net devices under 0000:86:00.1: cvl_0_1 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:02.012 15:31:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:02.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:02.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:22:02.012 00:22:02.012 --- 10.0.0.2 ping statistics --- 00:22:02.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.012 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:22:02.012 00:22:02.012 --- 10.0.0.1 ping statistics --- 00:22:02.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.012 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- 
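Stripped of the xtrace noise, the `nvmf_tcp_init` phase traced above amounts to: move one port of the detected E810 NIC pair into a network namespace, address both ends on 10.0.0.0/24, open the NVMe/TCP port in iptables, and verify reachability with ping. Summarized from the log for readability (requires root and the physical NICs, so not runnable as-is):

```
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Both pings succeed (0% packet loss), so the target side (10.0.0.2, inside the namespace) and the initiator side (10.0.0.1) can reach each other before `nvmf_tgt` is started.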
common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2239778 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2239778 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2239778 ']' 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.012 [2024-11-20 15:31:05.339280] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:22:02.012 [2024-11-20 15:31:05.339324] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.012 [2024-11-20 15:31:05.418841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.012 [2024-11-20 15:31:05.459891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.012 [2024-11-20 15:31:05.459929] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.012 [2024-11-20 15:31:05.459936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.012 [2024-11-20 15:31:05.459942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.012 [2024-11-20 15:31:05.459952] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:02.012 [2024-11-20 15:31:05.460511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.012 [2024-11-20 15:31:05.595322] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.012 null0 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 834095be78a84a9a86bfeade2fde28b8 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.012 [2024-11-20 15:31:05.647582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.012 nvme0n1 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.012 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.012 [ 00:22:02.012 { 00:22:02.012 "name": "nvme0n1", 00:22:02.012 "aliases": [ 00:22:02.012 "834095be-78a8-4a9a-86bf-eade2fde28b8" 00:22:02.012 ], 00:22:02.012 "product_name": "NVMe disk", 00:22:02.012 "block_size": 512, 00:22:02.012 "num_blocks": 2097152, 00:22:02.012 "uuid": "834095be-78a8-4a9a-86bf-eade2fde28b8", 00:22:02.012 "numa_id": 1, 00:22:02.012 "assigned_rate_limits": { 00:22:02.012 "rw_ios_per_sec": 0, 00:22:02.012 "rw_mbytes_per_sec": 0, 00:22:02.012 "r_mbytes_per_sec": 0, 00:22:02.012 "w_mbytes_per_sec": 0 00:22:02.012 }, 00:22:02.012 "claimed": false, 00:22:02.012 "zoned": false, 00:22:02.012 "supported_io_types": { 00:22:02.013 "read": true, 00:22:02.013 "write": true, 00:22:02.013 "unmap": false, 00:22:02.013 "flush": true, 00:22:02.013 "reset": true, 00:22:02.013 "nvme_admin": true, 00:22:02.013 "nvme_io": true, 00:22:02.013 "nvme_io_md": false, 00:22:02.013 "write_zeroes": true, 00:22:02.013 "zcopy": false, 00:22:02.013 "get_zone_info": false, 00:22:02.013 "zone_management": false, 00:22:02.013 "zone_append": false, 00:22:02.013 "compare": true, 00:22:02.013 "compare_and_write": true, 00:22:02.013 "abort": true, 00:22:02.013 "seek_hole": false, 00:22:02.013 "seek_data": false, 00:22:02.013 "copy": true, 00:22:02.013 
"nvme_iov_md": false 00:22:02.013 }, 00:22:02.013 "memory_domains": [ 00:22:02.013 { 00:22:02.013 "dma_device_id": "system", 00:22:02.013 "dma_device_type": 1 00:22:02.013 } 00:22:02.013 ], 00:22:02.013 "driver_specific": { 00:22:02.013 "nvme": [ 00:22:02.013 { 00:22:02.013 "trid": { 00:22:02.013 "trtype": "TCP", 00:22:02.013 "adrfam": "IPv4", 00:22:02.013 "traddr": "10.0.0.2", 00:22:02.013 "trsvcid": "4420", 00:22:02.013 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:02.013 }, 00:22:02.013 "ctrlr_data": { 00:22:02.013 "cntlid": 1, 00:22:02.013 "vendor_id": "0x8086", 00:22:02.013 "model_number": "SPDK bdev Controller", 00:22:02.013 "serial_number": "00000000000000000000", 00:22:02.013 "firmware_revision": "25.01", 00:22:02.013 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:02.013 "oacs": { 00:22:02.013 "security": 0, 00:22:02.013 "format": 0, 00:22:02.013 "firmware": 0, 00:22:02.013 "ns_manage": 0 00:22:02.013 }, 00:22:02.013 "multi_ctrlr": true, 00:22:02.013 "ana_reporting": false 00:22:02.013 }, 00:22:02.013 "vs": { 00:22:02.013 "nvme_version": "1.3" 00:22:02.013 }, 00:22:02.013 "ns_data": { 00:22:02.013 "id": 1, 00:22:02.013 "can_share": true 00:22:02.013 } 00:22:02.013 } 00:22:02.013 ], 00:22:02.013 "mp_policy": "active_passive" 00:22:02.013 } 00:22:02.013 } 00:22:02.013 ] 00:22:02.013 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.013 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:02.013 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.013 15:31:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.013 [2024-11-20 15:31:05.912136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:02.013 [2024-11-20 15:31:05.912193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1ba7220 (9): Bad file descriptor 00:22:02.271 [2024-11-20 15:31:06.044025] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.271 [ 00:22:02.271 { 00:22:02.271 "name": "nvme0n1", 00:22:02.271 "aliases": [ 00:22:02.271 "834095be-78a8-4a9a-86bf-eade2fde28b8" 00:22:02.271 ], 00:22:02.271 "product_name": "NVMe disk", 00:22:02.271 "block_size": 512, 00:22:02.271 "num_blocks": 2097152, 00:22:02.271 "uuid": "834095be-78a8-4a9a-86bf-eade2fde28b8", 00:22:02.271 "numa_id": 1, 00:22:02.271 "assigned_rate_limits": { 00:22:02.271 "rw_ios_per_sec": 0, 00:22:02.271 "rw_mbytes_per_sec": 0, 00:22:02.271 "r_mbytes_per_sec": 0, 00:22:02.271 "w_mbytes_per_sec": 0 00:22:02.271 }, 00:22:02.271 "claimed": false, 00:22:02.271 "zoned": false, 00:22:02.271 "supported_io_types": { 00:22:02.271 "read": true, 00:22:02.271 "write": true, 00:22:02.271 "unmap": false, 00:22:02.271 "flush": true, 00:22:02.271 "reset": true, 00:22:02.271 "nvme_admin": true, 00:22:02.271 "nvme_io": true, 00:22:02.271 "nvme_io_md": false, 00:22:02.271 "write_zeroes": true, 00:22:02.271 "zcopy": false, 00:22:02.271 "get_zone_info": false, 00:22:02.271 "zone_management": false, 00:22:02.271 "zone_append": false, 00:22:02.271 "compare": true, 00:22:02.271 "compare_and_write": true, 00:22:02.271 "abort": true, 00:22:02.271 "seek_hole": false, 00:22:02.271 "seek_data": false, 00:22:02.271 "copy": true, 00:22:02.271 "nvme_iov_md": false 00:22:02.271 }, 00:22:02.271 "memory_domains": [ 
00:22:02.271 { 00:22:02.271 "dma_device_id": "system", 00:22:02.271 "dma_device_type": 1 00:22:02.271 } 00:22:02.271 ], 00:22:02.271 "driver_specific": { 00:22:02.271 "nvme": [ 00:22:02.271 { 00:22:02.271 "trid": { 00:22:02.271 "trtype": "TCP", 00:22:02.271 "adrfam": "IPv4", 00:22:02.271 "traddr": "10.0.0.2", 00:22:02.271 "trsvcid": "4420", 00:22:02.271 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:02.271 }, 00:22:02.271 "ctrlr_data": { 00:22:02.271 "cntlid": 2, 00:22:02.271 "vendor_id": "0x8086", 00:22:02.271 "model_number": "SPDK bdev Controller", 00:22:02.271 "serial_number": "00000000000000000000", 00:22:02.271 "firmware_revision": "25.01", 00:22:02.271 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:02.271 "oacs": { 00:22:02.271 "security": 0, 00:22:02.271 "format": 0, 00:22:02.271 "firmware": 0, 00:22:02.271 "ns_manage": 0 00:22:02.271 }, 00:22:02.271 "multi_ctrlr": true, 00:22:02.271 "ana_reporting": false 00:22:02.271 }, 00:22:02.271 "vs": { 00:22:02.271 "nvme_version": "1.3" 00:22:02.271 }, 00:22:02.271 "ns_data": { 00:22:02.271 "id": 1, 00:22:02.271 "can_share": true 00:22:02.271 } 00:22:02.271 } 00:22:02.271 ], 00:22:02.271 "mp_policy": "active_passive" 00:22:02.271 } 00:22:02.271 } 00:22:02.271 ] 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ULNYhwlWxE 
00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ULNYhwlWxE 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.ULNYhwlWxE 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.271 [2024-11-20 15:31:06.116750] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:02.271 [2024-11-20 15:31:06.116844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.271 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.271 [2024-11-20 15:31:06.136814] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.530 nvme0n1 00:22:02.530 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.530 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:02.530 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.530 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.530 [ 00:22:02.530 { 00:22:02.530 "name": "nvme0n1", 00:22:02.530 "aliases": [ 00:22:02.530 "834095be-78a8-4a9a-86bf-eade2fde28b8" 00:22:02.530 ], 00:22:02.530 "product_name": "NVMe disk", 00:22:02.530 "block_size": 512, 00:22:02.530 "num_blocks": 2097152, 00:22:02.530 "uuid": "834095be-78a8-4a9a-86bf-eade2fde28b8", 00:22:02.530 "numa_id": 1, 00:22:02.530 "assigned_rate_limits": { 00:22:02.530 "rw_ios_per_sec": 0, 00:22:02.530 
"rw_mbytes_per_sec": 0, 00:22:02.530 "r_mbytes_per_sec": 0, 00:22:02.530 "w_mbytes_per_sec": 0 00:22:02.530 }, 00:22:02.530 "claimed": false, 00:22:02.530 "zoned": false, 00:22:02.530 "supported_io_types": { 00:22:02.530 "read": true, 00:22:02.530 "write": true, 00:22:02.530 "unmap": false, 00:22:02.530 "flush": true, 00:22:02.530 "reset": true, 00:22:02.530 "nvme_admin": true, 00:22:02.530 "nvme_io": true, 00:22:02.530 "nvme_io_md": false, 00:22:02.530 "write_zeroes": true, 00:22:02.530 "zcopy": false, 00:22:02.530 "get_zone_info": false, 00:22:02.530 "zone_management": false, 00:22:02.530 "zone_append": false, 00:22:02.530 "compare": true, 00:22:02.530 "compare_and_write": true, 00:22:02.530 "abort": true, 00:22:02.530 "seek_hole": false, 00:22:02.530 "seek_data": false, 00:22:02.530 "copy": true, 00:22:02.530 "nvme_iov_md": false 00:22:02.530 }, 00:22:02.530 "memory_domains": [ 00:22:02.530 { 00:22:02.530 "dma_device_id": "system", 00:22:02.530 "dma_device_type": 1 00:22:02.530 } 00:22:02.530 ], 00:22:02.530 "driver_specific": { 00:22:02.530 "nvme": [ 00:22:02.530 { 00:22:02.530 "trid": { 00:22:02.530 "trtype": "TCP", 00:22:02.530 "adrfam": "IPv4", 00:22:02.530 "traddr": "10.0.0.2", 00:22:02.530 "trsvcid": "4421", 00:22:02.530 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:02.530 }, 00:22:02.530 "ctrlr_data": { 00:22:02.530 "cntlid": 3, 00:22:02.530 "vendor_id": "0x8086", 00:22:02.530 "model_number": "SPDK bdev Controller", 00:22:02.530 "serial_number": "00000000000000000000", 00:22:02.530 "firmware_revision": "25.01", 00:22:02.530 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:02.530 "oacs": { 00:22:02.530 "security": 0, 00:22:02.530 "format": 0, 00:22:02.530 "firmware": 0, 00:22:02.530 "ns_manage": 0 00:22:02.530 }, 00:22:02.530 "multi_ctrlr": true, 00:22:02.530 "ana_reporting": false 00:22:02.530 }, 00:22:02.530 "vs": { 00:22:02.530 "nvme_version": "1.3" 00:22:02.530 }, 00:22:02.530 "ns_data": { 00:22:02.530 "id": 1, 00:22:02.530 "can_share": true 00:22:02.530 } 
00:22:02.530 } 00:22:02.530 ], 00:22:02.530 "mp_policy": "active_passive" 00:22:02.530 } 00:22:02.530 } 00:22:02.530 ] 00:22:02.530 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.530 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.530 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.ULNYhwlWxE 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:02.531 rmmod nvme_tcp 00:22:02.531 rmmod nvme_fabrics 00:22:02.531 rmmod nvme_keyring 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:02.531 15:31:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2239778 ']' 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2239778 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2239778 ']' 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2239778 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2239778 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2239778' 00:22:02.531 killing process with pid 2239778 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2239778 00:22:02.531 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2239778 00:22:02.790 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:02.790 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:02.790 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:02.790 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:02.790 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:02.790 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:02.790 
15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:02.791 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:02.791 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:02.791 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.791 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.791 15:31:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.693 15:31:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:04.693 00:22:04.693 real 0m9.472s 00:22:04.693 user 0m3.034s 00:22:04.693 sys 0m4.877s 00:22:04.693 15:31:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.693 15:31:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.693 ************************************ 00:22:04.693 END TEST nvmf_async_init 00:22:04.693 ************************************ 00:22:04.952 15:31:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:04.952 15:31:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:04.952 15:31:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:04.952 15:31:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.952 ************************************ 00:22:04.952 START TEST dma 00:22:04.952 ************************************ 00:22:04.952 15:31:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:22:04.952 * Looking for test storage... 00:22:04.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:04.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.953 --rc genhtml_branch_coverage=1 00:22:04.953 --rc genhtml_function_coverage=1 00:22:04.953 --rc genhtml_legend=1 00:22:04.953 --rc geninfo_all_blocks=1 00:22:04.953 --rc geninfo_unexecuted_blocks=1 00:22:04.953 00:22:04.953 ' 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:04.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.953 --rc genhtml_branch_coverage=1 00:22:04.953 --rc genhtml_function_coverage=1 
00:22:04.953 --rc genhtml_legend=1 00:22:04.953 --rc geninfo_all_blocks=1 00:22:04.953 --rc geninfo_unexecuted_blocks=1 00:22:04.953 00:22:04.953 ' 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:04.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.953 --rc genhtml_branch_coverage=1 00:22:04.953 --rc genhtml_function_coverage=1 00:22:04.953 --rc genhtml_legend=1 00:22:04.953 --rc geninfo_all_blocks=1 00:22:04.953 --rc geninfo_unexecuted_blocks=1 00:22:04.953 00:22:04.953 ' 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:04.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.953 --rc genhtml_branch_coverage=1 00:22:04.953 --rc genhtml_function_coverage=1 00:22:04.953 --rc genhtml_legend=1 00:22:04.953 --rc geninfo_all_blocks=1 00:22:04.953 --rc geninfo_unexecuted_blocks=1 00:22:04.953 00:22:04.953 ' 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:04.953 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:05.212 
15:31:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:05.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:05.212 00:22:05.212 real 0m0.213s 00:22:05.212 user 0m0.130s 00:22:05.212 sys 0m0.098s 00:22:05.212 15:31:08 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:05.212 ************************************ 00:22:05.212 END TEST dma 00:22:05.212 ************************************ 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.212 ************************************ 00:22:05.212 START TEST nvmf_identify 00:22:05.212 ************************************ 00:22:05.212 15:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:05.212 * Looking for test storage... 
00:22:05.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:05.212 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:05.212 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:22:05.212 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:05.212 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:05.212 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:05.212 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:05.212 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:05.212 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:05.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.213 --rc genhtml_branch_coverage=1 00:22:05.213 --rc genhtml_function_coverage=1 00:22:05.213 --rc genhtml_legend=1 00:22:05.213 --rc geninfo_all_blocks=1 00:22:05.213 --rc geninfo_unexecuted_blocks=1 00:22:05.213 00:22:05.213 ' 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:22:05.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.213 --rc genhtml_branch_coverage=1 00:22:05.213 --rc genhtml_function_coverage=1 00:22:05.213 --rc genhtml_legend=1 00:22:05.213 --rc geninfo_all_blocks=1 00:22:05.213 --rc geninfo_unexecuted_blocks=1 00:22:05.213 00:22:05.213 ' 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:05.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.213 --rc genhtml_branch_coverage=1 00:22:05.213 --rc genhtml_function_coverage=1 00:22:05.213 --rc genhtml_legend=1 00:22:05.213 --rc geninfo_all_blocks=1 00:22:05.213 --rc geninfo_unexecuted_blocks=1 00:22:05.213 00:22:05.213 ' 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:05.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.213 --rc genhtml_branch_coverage=1 00:22:05.213 --rc genhtml_function_coverage=1 00:22:05.213 --rc genhtml_legend=1 00:22:05.213 --rc geninfo_all_blocks=1 00:22:05.213 --rc geninfo_unexecuted_blocks=1 00:22:05.213 00:22:05.213 ' 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.213 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:05.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:05.471 15:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:12.042 15:31:14 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:12.042 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.042 
15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:12.042 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:12.042 Found net devices under 0000:86:00.0: cvl_0_0 00:22:12.042 15:31:14 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:12.042 Found net devices under 0000:86:00.1: cvl_0_1 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:12.042 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:12.043 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:12.043 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:12.043 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.043 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
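The device-discovery loop above ("Found net devices under 0000:86:00.0: cvl_0_0") resolves each supported PCI function to its kernel net interfaces via sysfs, then strips the path as in `${pci_net_devs[@]##*/}`. A minimal sketch of that lookup, with the sysfs root made a parameter purely for illustration:

```shell
# Sketch of the gather_supported_nvmf_pci_devs lookup above: list the
# kernel net interfaces that sit under a given PCI function. The optional
# second argument (sysfs root) is an addition for testability.
pci_net_devs() {
    local pci=$1 root=${2:-/sys/bus/pci/devices} dev
    for dev in "$root/$pci/net/"*; do
        [ -e "$dev" ] || continue
        echo "${dev##*/}"   # strip the path, keep the ifname (cf. ${pci_net_devs[@]##*/})
    done
}

# pci_net_devs 0000:86:00.0   # in the log above: cvl_0_0
```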
00:22:12.043 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:12.043 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:12.043 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:12.043 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:12.043 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:12.043 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:12.043 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:12.043 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.043 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:12.043 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:12.043 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:12.043 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:12.043 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:12.043 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:12.043 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:12.043 15:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:12.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:12.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:22:12.043 00:22:12.043 --- 10.0.0.2 ping statistics --- 00:22:12.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.043 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:12.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:12.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:22:12.043 00:22:12.043 --- 10.0.0.1 ping statistics --- 00:22:12.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.043 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
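The `nvmf_tcp_init` sequence above builds the target-in-namespace topology: the target NIC is moved into `cvl_0_0_ns_spdk` with 10.0.0.2/24, the initiator NIC keeps 10.0.0.1/24 on the host, port 4420 is opened, and both directions are ping-verified. A dry-run sketch of those steps (names and addresses follow the trace; `RUN=echo` only prints the commands, drop it and run as root to apply them):

```shell
# Dry-run sketch of the netns topology built in the trace above.
RUN="echo"
TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

$RUN ip netns add "$NS"
$RUN ip link set "$TARGET_IF" netns "$NS"            # target NIC lives in the namespace
$RUN ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"     # initiator side, on the host
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
$RUN ip link set "$INITIATOR_IF" up
$RUN ip netns exec "$NS" ip link set "$TARGET_IF" up
$RUN ip netns exec "$NS" ip link set lo up
$RUN iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
$RUN ping -c 1 10.0.0.2                              # initiator -> target
$RUN ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator
```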
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2243511 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2243511 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2243511 ']' 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.043 [2024-11-20 15:31:15.148190] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:22:12.043 [2024-11-20 15:31:15.148232] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.043 [2024-11-20 15:31:15.225680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:12.043 [2024-11-20 15:31:15.270280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.043 [2024-11-20 15:31:15.270319] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.043 [2024-11-20 15:31:15.270326] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.043 [2024-11-20 15:31:15.270333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.043 [2024-11-20 15:31:15.270338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
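The launch step above starts `nvmf_tgt` inside the namespace and then blocks on "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...". A sketch of that wait, assuming a polling helper; the name `wait_for_rpc_sock` and its retry logic are hypothetical, not SPDK's `waitforlisten`:

```shell
# Sketch of launching nvmf_tgt in the namespace and polling for its RPC
# socket. APP path and flags follow the trace; the helper is hypothetical.
APP=build/bin/nvmf_tgt
SOCK=/var/tmp/spdk.sock

wait_for_rpc_sock() {   # poll until the UNIX socket appears, up to $1 tries
    local i tries=${1:-100}
    for (( i = 0; i < tries; i++ )); do
        [ -S "$SOCK" ] && return 0
        sleep 0.1
    done
    return 1
}

# Requires a built SPDK tree and the namespace from the previous step:
# ip netns exec cvl_0_0_ns_spdk "$APP" -i 0 -e 0xFFFF -m 0xF &
# nvmfpid=$!
# wait_for_rpc_sock || exit 1
```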
00:22:12.043 [2024-11-20 15:31:15.271926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.043 [2024-11-20 15:31:15.271977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.043 [2024-11-20 15:31:15.272043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.043 [2024-11-20 15:31:15.272043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.043 [2024-11-20 15:31:15.381552] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.043 Malloc0 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.043 15:31:15 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.043 [2024-11-20 15:31:15.488588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.043 15:31:15 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.043 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.043 [ 00:22:12.044 { 00:22:12.044 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:12.044 "subtype": "Discovery", 00:22:12.044 "listen_addresses": [ 00:22:12.044 { 00:22:12.044 "trtype": "TCP", 00:22:12.044 "adrfam": "IPv4", 00:22:12.044 "traddr": "10.0.0.2", 00:22:12.044 "trsvcid": "4420" 00:22:12.044 } 00:22:12.044 ], 00:22:12.044 "allow_any_host": true, 00:22:12.044 "hosts": [] 00:22:12.044 }, 00:22:12.044 { 00:22:12.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.044 "subtype": "NVMe", 00:22:12.044 "listen_addresses": [ 00:22:12.044 { 00:22:12.044 "trtype": "TCP", 00:22:12.044 "adrfam": "IPv4", 00:22:12.044 "traddr": "10.0.0.2", 00:22:12.044 "trsvcid": "4420" 00:22:12.044 } 00:22:12.044 ], 00:22:12.044 "allow_any_host": true, 00:22:12.044 "hosts": [], 00:22:12.044 "serial_number": "SPDK00000000000001", 00:22:12.044 "model_number": "SPDK bdev Controller", 00:22:12.044 "max_namespaces": 32, 00:22:12.044 "min_cntlid": 1, 00:22:12.044 "max_cntlid": 65519, 00:22:12.044 "namespaces": [ 00:22:12.044 { 00:22:12.044 "nsid": 1, 00:22:12.044 "bdev_name": "Malloc0", 00:22:12.044 "name": "Malloc0", 00:22:12.044 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:12.044 "eui64": "ABCDEF0123456789", 00:22:12.044 "uuid": "60cb7eb3-2b02-40f1-8671-403f11afe29e" 00:22:12.044 } 00:22:12.044 ] 00:22:12.044 } 00:22:12.044 ] 00:22:12.044 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.044 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
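The `rpc_cmd` calls above (transport, malloc bdev, subsystem, namespace, listeners, then `nvmf_get_subsystems` producing the JSON dump) map onto SPDK's `rpc.py` client. A dry-run sketch of that sequence; the `scripts/rpc.py` path is an assumption, and `RPC="echo …"` only prints the invocations:

```shell
# Dry-run sketch of the RPC sequence the trace above issues via rpc_cmd.
RPC="echo scripts/rpc.py"   # drop the echo to talk to a live nvmf_tgt

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0   # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems   # in the log, this returned the JSON dump above
```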
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:12.044 [2024-11-20 15:31:15.541167] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:22:12.044 [2024-11-20 15:31:15.541200] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2243682 ] 00:22:12.044 [2024-11-20 15:31:15.581928] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:12.044 [2024-11-20 15:31:15.585978] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:12.044 [2024-11-20 15:31:15.585986] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:12.044 [2024-11-20 15:31:15.586000] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:12.044 [2024-11-20 15:31:15.586010] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:12.044 [2024-11-20 15:31:15.586614] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:12.044 [2024-11-20 15:31:15.586644] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x21da690 0 00:22:12.044 [2024-11-20 15:31:15.592961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:12.044 [2024-11-20 15:31:15.592975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:12.044 [2024-11-20 15:31:15.592980] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:12.044 [2024-11-20 15:31:15.592983] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:12.044 [2024-11-20 15:31:15.593016] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.044 [2024-11-20 15:31:15.593021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.044 [2024-11-20 15:31:15.593025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21da690) 00:22:12.044 [2024-11-20 15:31:15.593039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:12.044 [2024-11-20 15:31:15.593057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c100, cid 0, qid 0 00:22:12.044 [2024-11-20 15:31:15.599957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.044 [2024-11-20 15:31:15.599966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.044 [2024-11-20 15:31:15.599969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.044 [2024-11-20 15:31:15.599974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c100) on tqpair=0x21da690 00:22:12.044 [2024-11-20 15:31:15.599983] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:12.044 [2024-11-20 15:31:15.599989] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:12.044 [2024-11-20 15:31:15.599994] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:12.044 [2024-11-20 15:31:15.600007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.044 [2024-11-20 15:31:15.600010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.044 [2024-11-20 15:31:15.600014] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21da690) 
00:22:12.044 [2024-11-20 15:31:15.600021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.044 [2024-11-20 15:31:15.600034] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c100, cid 0, qid 0 00:22:12.044 [2024-11-20 15:31:15.600177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.044 [2024-11-20 15:31:15.600183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.044 [2024-11-20 15:31:15.600186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.044 [2024-11-20 15:31:15.600189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c100) on tqpair=0x21da690 00:22:12.044 [2024-11-20 15:31:15.600194] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:12.044 [2024-11-20 15:31:15.600201] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:12.044 [2024-11-20 15:31:15.600207] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.044 [2024-11-20 15:31:15.600211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.044 [2024-11-20 15:31:15.600214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21da690) 00:22:12.044 [2024-11-20 15:31:15.600220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.044 [2024-11-20 15:31:15.600230] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c100, cid 0, qid 0 00:22:12.044 [2024-11-20 15:31:15.600293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.044 [2024-11-20 15:31:15.600298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:12.044 [2024-11-20 15:31:15.600302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.044 [2024-11-20 15:31:15.600305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c100) on tqpair=0x21da690 00:22:12.044 [2024-11-20 15:31:15.600310] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:12.044 [2024-11-20 15:31:15.600317] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:12.044 [2024-11-20 15:31:15.600323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.044 [2024-11-20 15:31:15.600326] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.044 [2024-11-20 15:31:15.600329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21da690) 00:22:12.044 [2024-11-20 15:31:15.600338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.044 [2024-11-20 15:31:15.600347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c100, cid 0, qid 0 00:22:12.044 [2024-11-20 15:31:15.600409] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.044 [2024-11-20 15:31:15.600414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.044 [2024-11-20 15:31:15.600417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.044 [2024-11-20 15:31:15.600421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c100) on tqpair=0x21da690 00:22:12.044 [2024-11-20 15:31:15.600425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:12.044 [2024-11-20 15:31:15.600433] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.044 [2024-11-20 15:31:15.600437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.044 [2024-11-20 15:31:15.600440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21da690) 00:22:12.044 [2024-11-20 15:31:15.600446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.044 [2024-11-20 15:31:15.600455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c100, cid 0, qid 0 00:22:12.044 [2024-11-20 15:31:15.600519] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.044 [2024-11-20 15:31:15.600525] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.044 [2024-11-20 15:31:15.600528] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.044 [2024-11-20 15:31:15.600532] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c100) on tqpair=0x21da690 00:22:12.045 [2024-11-20 15:31:15.600536] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:12.045 [2024-11-20 15:31:15.600540] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:12.045 [2024-11-20 15:31:15.600547] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:12.045 [2024-11-20 15:31:15.600655] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:12.045 [2024-11-20 15:31:15.600659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:12.045 [2024-11-20 15:31:15.600666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.600670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.600673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21da690) 00:22:12.045 [2024-11-20 15:31:15.600678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.045 [2024-11-20 15:31:15.600688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c100, cid 0, qid 0 00:22:12.045 [2024-11-20 15:31:15.600751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.045 [2024-11-20 15:31:15.600757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.045 [2024-11-20 15:31:15.600760] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.600763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c100) on tqpair=0x21da690 00:22:12.045 [2024-11-20 15:31:15.600767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:12.045 [2024-11-20 15:31:15.600775] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.600781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.600785] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21da690) 00:22:12.045 [2024-11-20 15:31:15.600790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.045 [2024-11-20 15:31:15.600800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c100, cid 0, qid 0 00:22:12.045 [2024-11-20 
15:31:15.600870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.045 [2024-11-20 15:31:15.600875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.045 [2024-11-20 15:31:15.600879] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.600882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c100) on tqpair=0x21da690 00:22:12.045 [2024-11-20 15:31:15.600886] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:12.045 [2024-11-20 15:31:15.600890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:12.045 [2024-11-20 15:31:15.600897] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:12.045 [2024-11-20 15:31:15.600906] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:12.045 [2024-11-20 15:31:15.600914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.600917] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21da690) 00:22:12.045 [2024-11-20 15:31:15.600923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.045 [2024-11-20 15:31:15.600933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c100, cid 0, qid 0 00:22:12.045 [2024-11-20 15:31:15.601037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.045 [2024-11-20 15:31:15.601043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:22:12.045 [2024-11-20 15:31:15.601046] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.601050] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21da690): datao=0, datal=4096, cccid=0 00:22:12.045 [2024-11-20 15:31:15.601054] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x223c100) on tqpair(0x21da690): expected_datao=0, payload_size=4096 00:22:12.045 [2024-11-20 15:31:15.601058] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.601071] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.601076] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.642106] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.045 [2024-11-20 15:31:15.642117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.045 [2024-11-20 15:31:15.642120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.642124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c100) on tqpair=0x21da690 00:22:12.045 [2024-11-20 15:31:15.642132] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:12.045 [2024-11-20 15:31:15.642137] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:12.045 [2024-11-20 15:31:15.642141] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:12.045 [2024-11-20 15:31:15.642149] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:12.045 [2024-11-20 15:31:15.642153] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:12.045 [2024-11-20 15:31:15.642160] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:12.045 [2024-11-20 15:31:15.642171] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:12.045 [2024-11-20 15:31:15.642177] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.642181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.642184] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21da690) 00:22:12.045 [2024-11-20 15:31:15.642191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:12.045 [2024-11-20 15:31:15.642202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c100, cid 0, qid 0 00:22:12.045 [2024-11-20 15:31:15.642266] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.045 [2024-11-20 15:31:15.642272] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.045 [2024-11-20 15:31:15.642275] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.642279] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c100) on tqpair=0x21da690 00:22:12.045 [2024-11-20 15:31:15.642285] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.642289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.642292] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21da690) 00:22:12.045 [2024-11-20 15:31:15.642297] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.045 [2024-11-20 15:31:15.642302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.642306] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.642309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x21da690) 00:22:12.045 [2024-11-20 15:31:15.642314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.045 [2024-11-20 15:31:15.642319] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.642323] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.642326] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x21da690) 00:22:12.045 [2024-11-20 15:31:15.642331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.045 [2024-11-20 15:31:15.642336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.642339] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.642342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21da690) 00:22:12.045 [2024-11-20 15:31:15.642347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.045 [2024-11-20 15:31:15.642351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:12.045 [2024-11-20 15:31:15.642359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:12.045 [2024-11-20 15:31:15.642365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.642368] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21da690) 00:22:12.045 [2024-11-20 15:31:15.642374] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.045 [2024-11-20 15:31:15.642387] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c100, cid 0, qid 0 00:22:12.045 [2024-11-20 15:31:15.642391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c280, cid 1, qid 0 00:22:12.045 [2024-11-20 15:31:15.642395] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c400, cid 2, qid 0 00:22:12.045 [2024-11-20 15:31:15.642400] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c580, cid 3, qid 0 00:22:12.045 [2024-11-20 15:31:15.642404] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c700, cid 4, qid 0 00:22:12.045 [2024-11-20 15:31:15.642502] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.045 [2024-11-20 15:31:15.642508] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.045 [2024-11-20 15:31:15.642510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.045 [2024-11-20 15:31:15.642514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c700) on tqpair=0x21da690 00:22:12.045 [2024-11-20 15:31:15.642520] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:12.046 [2024-11-20 15:31:15.642525] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:22:12.046 [2024-11-20 15:31:15.642535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.642539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21da690) 00:22:12.046 [2024-11-20 15:31:15.642544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.046 [2024-11-20 15:31:15.642554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c700, cid 4, qid 0 00:22:12.046 [2024-11-20 15:31:15.642629] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.046 [2024-11-20 15:31:15.642635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.046 [2024-11-20 15:31:15.642638] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.642641] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21da690): datao=0, datal=4096, cccid=4 00:22:12.046 [2024-11-20 15:31:15.642645] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x223c700) on tqpair(0x21da690): expected_datao=0, payload_size=4096 00:22:12.046 [2024-11-20 15:31:15.642649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.642655] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.642659] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.642682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.046 [2024-11-20 15:31:15.642688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.046 [2024-11-20 15:31:15.642691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.642694] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x223c700) on tqpair=0x21da690 00:22:12.046 [2024-11-20 15:31:15.642705] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:12.046 [2024-11-20 15:31:15.642725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.642729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21da690) 00:22:12.046 [2024-11-20 15:31:15.642734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.046 [2024-11-20 15:31:15.642741] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.642744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.642747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21da690) 00:22:12.046 [2024-11-20 15:31:15.642752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.046 [2024-11-20 15:31:15.642767] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c700, cid 4, qid 0 00:22:12.046 [2024-11-20 15:31:15.642772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c880, cid 5, qid 0 00:22:12.046 [2024-11-20 15:31:15.642880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.046 [2024-11-20 15:31:15.642886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.046 [2024-11-20 15:31:15.642889] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.642892] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21da690): datao=0, datal=1024, cccid=4 00:22:12.046 [2024-11-20 15:31:15.642896] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x223c700) on tqpair(0x21da690): expected_datao=0, payload_size=1024 00:22:12.046 [2024-11-20 15:31:15.642900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.642905] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.642909] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.642914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.046 [2024-11-20 15:31:15.642919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.046 [2024-11-20 15:31:15.642922] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.642925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c880) on tqpair=0x21da690 00:22:12.046 [2024-11-20 15:31:15.686956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.046 [2024-11-20 15:31:15.686965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.046 [2024-11-20 15:31:15.686969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.686972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c700) on tqpair=0x21da690 00:22:12.046 [2024-11-20 15:31:15.686983] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.686987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21da690) 00:22:12.046 [2024-11-20 15:31:15.686993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.046 [2024-11-20 15:31:15.687009] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c700, cid 4, qid 0 00:22:12.046 [2024-11-20 15:31:15.687175] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.046 [2024-11-20 15:31:15.687181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.046 [2024-11-20 15:31:15.687184] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.687188] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21da690): datao=0, datal=3072, cccid=4 00:22:12.046 [2024-11-20 15:31:15.687191] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x223c700) on tqpair(0x21da690): expected_datao=0, payload_size=3072 00:22:12.046 [2024-11-20 15:31:15.687196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.687212] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.687216] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.687254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.046 [2024-11-20 15:31:15.687260] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.046 [2024-11-20 15:31:15.687263] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.687266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c700) on tqpair=0x21da690 00:22:12.046 [2024-11-20 15:31:15.687274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.687278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21da690) 00:22:12.046 [2024-11-20 15:31:15.687283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.046 [2024-11-20 15:31:15.687300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c700, cid 4, qid 0 00:22:12.046 [2024-11-20 
15:31:15.687373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.046 [2024-11-20 15:31:15.687379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.046 [2024-11-20 15:31:15.687382] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.687385] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21da690): datao=0, datal=8, cccid=4 00:22:12.046 [2024-11-20 15:31:15.687390] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x223c700) on tqpair(0x21da690): expected_datao=0, payload_size=8 00:22:12.046 [2024-11-20 15:31:15.687393] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.687399] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.687402] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.729062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.046 [2024-11-20 15:31:15.729072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.046 [2024-11-20 15:31:15.729075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.046 [2024-11-20 15:31:15.729078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c700) on tqpair=0x21da690 00:22:12.046 ===================================================== 00:22:12.046 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:12.046 ===================================================== 00:22:12.046 Controller Capabilities/Features 00:22:12.046 ================================ 00:22:12.046 Vendor ID: 0000 00:22:12.046 Subsystem Vendor ID: 0000 00:22:12.046 Serial Number: .................... 00:22:12.046 Model Number: ........................................ 
00:22:12.046 Firmware Version: 25.01
00:22:12.046 Recommended Arb Burst: 0
00:22:12.046 IEEE OUI Identifier: 00 00 00
00:22:12.046 Multi-path I/O
00:22:12.046 May have multiple subsystem ports: No
00:22:12.046 May have multiple controllers: No
00:22:12.046 Associated with SR-IOV VF: No
00:22:12.046 Max Data Transfer Size: 131072
00:22:12.046 Max Number of Namespaces: 0
00:22:12.046 Max Number of I/O Queues: 1024
00:22:12.046 NVMe Specification Version (VS): 1.3
00:22:12.046 NVMe Specification Version (Identify): 1.3
00:22:12.046 Maximum Queue Entries: 128
00:22:12.046 Contiguous Queues Required: Yes
00:22:12.046 Arbitration Mechanisms Supported
00:22:12.046 Weighted Round Robin: Not Supported
00:22:12.046 Vendor Specific: Not Supported
00:22:12.046 Reset Timeout: 15000 ms
00:22:12.046 Doorbell Stride: 4 bytes
00:22:12.046 NVM Subsystem Reset: Not Supported
00:22:12.046 Command Sets Supported
00:22:12.046 NVM Command Set: Supported
00:22:12.046 Boot Partition: Not Supported
00:22:12.046 Memory Page Size Minimum: 4096 bytes
00:22:12.046 Memory Page Size Maximum: 4096 bytes
00:22:12.046 Persistent Memory Region: Not Supported
00:22:12.046 Optional Asynchronous Events Supported
00:22:12.046 Namespace Attribute Notices: Not Supported
00:22:12.046 Firmware Activation Notices: Not Supported
00:22:12.046 ANA Change Notices: Not Supported
00:22:12.046 PLE Aggregate Log Change Notices: Not Supported
00:22:12.046 LBA Status Info Alert Notices: Not Supported
00:22:12.046 EGE Aggregate Log Change Notices: Not Supported
00:22:12.046 Normal NVM Subsystem Shutdown event: Not Supported
00:22:12.046 Zone Descriptor Change Notices: Not Supported
00:22:12.046 Discovery Log Change Notices: Supported
00:22:12.046 Controller Attributes
00:22:12.046 128-bit Host Identifier: Not Supported
00:22:12.046 Non-Operational Permissive Mode: Not Supported
00:22:12.046 NVM Sets: Not Supported
00:22:12.046 Read Recovery Levels: Not Supported
00:22:12.046 Endurance Groups: Not Supported
00:22:12.046 Predictable Latency Mode: Not Supported
00:22:12.046 Traffic Based Keep Alive: Not Supported
00:22:12.046 Namespace Granularity: Not Supported
00:22:12.047 SQ Associations: Not Supported
00:22:12.047 UUID List: Not Supported
00:22:12.047 Multi-Domain Subsystem: Not Supported
00:22:12.047 Fixed Capacity Management: Not Supported
00:22:12.047 Variable Capacity Management: Not Supported
00:22:12.047 Delete Endurance Group: Not Supported
00:22:12.047 Delete NVM Set: Not Supported
00:22:12.047 Extended LBA Formats Supported: Not Supported
00:22:12.047 Flexible Data Placement Supported: Not Supported
00:22:12.047
00:22:12.047 Controller Memory Buffer Support
00:22:12.047 ================================
00:22:12.047 Supported: No
00:22:12.047
00:22:12.047 Persistent Memory Region Support
00:22:12.047 ================================
00:22:12.047 Supported: No
00:22:12.047
00:22:12.047 Admin Command Set Attributes
00:22:12.047 ============================
00:22:12.047 Security Send/Receive: Not Supported
00:22:12.047 Format NVM: Not Supported
00:22:12.047 Firmware Activate/Download: Not Supported
00:22:12.047 Namespace Management: Not Supported
00:22:12.047 Device Self-Test: Not Supported
00:22:12.047 Directives: Not Supported
00:22:12.047 NVMe-MI: Not Supported
00:22:12.047 Virtualization Management: Not Supported
00:22:12.047 Doorbell Buffer Config: Not Supported
00:22:12.047 Get LBA Status Capability: Not Supported
00:22:12.047 Command & Feature Lockdown Capability: Not Supported
00:22:12.047 Abort Command Limit: 1
00:22:12.047 Async Event Request Limit: 4
00:22:12.047 Number of Firmware Slots: N/A
00:22:12.047 Firmware Slot 1 Read-Only: N/A
00:22:12.047 Firmware Activation Without Reset: N/A
00:22:12.047 Multiple Update Detection Support: N/A
00:22:12.047 Firmware Update Granularity: No Information Provided
00:22:12.047 Per-Namespace SMART Log: No
00:22:12.047 Asymmetric Namespace Access Log Page: Not Supported
00:22:12.047 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:12.047 Command Effects Log Page: Not Supported
00:22:12.047 Get Log Page Extended Data: Supported
00:22:12.047 Telemetry Log Pages: Not Supported
00:22:12.047 Persistent Event Log Pages: Not Supported
00:22:12.047 Supported Log Pages Log Page: May Support
00:22:12.047 Commands Supported & Effects Log Page: Not Supported
00:22:12.047 Feature Identifiers & Effects Log Page: May Support
00:22:12.047 NVMe-MI Commands & Effects Log Page: May Support
00:22:12.047 Data Area 4 for Telemetry Log: Not Supported
00:22:12.047 Error Log Page Entries Supported: 128
00:22:12.047 Keep Alive: Not Supported
00:22:12.047
00:22:12.047 NVM Command Set Attributes
00:22:12.047 ==========================
00:22:12.047 Submission Queue Entry Size
00:22:12.047 Max: 1
00:22:12.047 Min: 1
00:22:12.047 Completion Queue Entry Size
00:22:12.047 Max: 1
00:22:12.047 Min: 1
00:22:12.047 Number of Namespaces: 0
00:22:12.047 Compare Command: Not Supported
00:22:12.047 Write Uncorrectable Command: Not Supported
00:22:12.047 Dataset Management Command: Not Supported
00:22:12.047 Write Zeroes Command: Not Supported
00:22:12.047 Set Features Save Field: Not Supported
00:22:12.047 Reservations: Not Supported
00:22:12.047 Timestamp: Not Supported
00:22:12.047 Copy: Not Supported
00:22:12.047 Volatile Write Cache: Not Present
00:22:12.047 Atomic Write Unit (Normal): 1
00:22:12.047 Atomic Write Unit (PFail): 1
00:22:12.047 Atomic Compare & Write Unit: 1
00:22:12.047 Fused Compare & Write: Supported
00:22:12.047 Scatter-Gather List
00:22:12.047 SGL Command Set: Supported
00:22:12.047 SGL Keyed: Supported
00:22:12.047 SGL Bit Bucket Descriptor: Not Supported
00:22:12.047 SGL Metadata Pointer: Not Supported
00:22:12.047 Oversized SGL: Not Supported
00:22:12.047 SGL Metadata Address: Not Supported
00:22:12.047 SGL Offset: Supported
00:22:12.047 Transport SGL Data Block: Not Supported
00:22:12.047 Replay Protected Memory Block: Not Supported
00:22:12.047
00:22:12.047 Firmware Slot Information
00:22:12.047 =========================
00:22:12.047 Active slot: 0
00:22:12.047
00:22:12.047
00:22:12.047 Error Log
00:22:12.047 =========
00:22:12.047
00:22:12.047 Active Namespaces
00:22:12.047 =================
00:22:12.047 Discovery Log Page
00:22:12.047 ==================
00:22:12.047 Generation Counter: 2
00:22:12.047 Number of Records: 2
00:22:12.047 Record Format: 0
00:22:12.047
00:22:12.047 Discovery Log Entry 0
00:22:12.047 ----------------------
00:22:12.047 Transport Type: 3 (TCP)
00:22:12.047 Address Family: 1 (IPv4)
00:22:12.047 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:12.047 Entry Flags:
00:22:12.047 Duplicate Returned Information: 1
00:22:12.047 Explicit Persistent Connection Support for Discovery: 1
00:22:12.047 Transport Requirements:
00:22:12.047 Secure Channel: Not Required
00:22:12.047 Port ID: 0 (0x0000)
00:22:12.047 Controller ID: 65535 (0xffff)
00:22:12.047 Admin Max SQ Size: 128
00:22:12.047 Transport Service Identifier: 4420
00:22:12.047 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:12.047 Transport Address: 10.0.0.2
00:22:12.047 Discovery Log Entry 1
00:22:12.047 ----------------------
00:22:12.047 Transport Type: 3 (TCP)
00:22:12.047 Address Family: 1 (IPv4)
00:22:12.047 Subsystem Type: 2 (NVM Subsystem)
00:22:12.047 Entry Flags:
00:22:12.047 Duplicate Returned Information: 0
00:22:12.047 Explicit Persistent Connection Support for Discovery: 0
00:22:12.047 Transport Requirements:
00:22:12.047 Secure Channel: Not Required
00:22:12.047 Port ID: 0 (0x0000)
00:22:12.047 Controller ID: 65535 (0xffff)
00:22:12.047 Admin Max SQ Size: 128
00:22:12.047 Transport Service Identifier: 4420
00:22:12.047 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:12.047 Transport Address: 10.0.0.2
[2024-11-20 15:31:15.729161] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:22:12.047 [2024-11-20
15:31:15.729172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c100) on tqpair=0x21da690 00:22:12.047 [2024-11-20 15:31:15.729179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.047 [2024-11-20 15:31:15.729184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c280) on tqpair=0x21da690 00:22:12.047 [2024-11-20 15:31:15.729188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.047 [2024-11-20 15:31:15.729192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c400) on tqpair=0x21da690 00:22:12.047 [2024-11-20 15:31:15.729196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.047 [2024-11-20 15:31:15.729200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c580) on tqpair=0x21da690 00:22:12.047 [2024-11-20 15:31:15.729204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.047 [2024-11-20 15:31:15.729214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.047 [2024-11-20 15:31:15.729218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.047 [2024-11-20 15:31:15.729222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21da690) 00:22:12.047 [2024-11-20 15:31:15.729228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.047 [2024-11-20 15:31:15.729241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c580, cid 3, qid 0 00:22:12.048 [2024-11-20 15:31:15.729305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.048 [2024-11-20 
15:31:15.729311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.048 [2024-11-20 15:31:15.729314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.729317] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c580) on tqpair=0x21da690 00:22:12.048 [2024-11-20 15:31:15.729323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.729327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.729330] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21da690) 00:22:12.048 [2024-11-20 15:31:15.729338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.048 [2024-11-20 15:31:15.729350] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c580, cid 3, qid 0 00:22:12.048 [2024-11-20 15:31:15.729422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.048 [2024-11-20 15:31:15.729428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.048 [2024-11-20 15:31:15.729431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.729434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c580) on tqpair=0x21da690 00:22:12.048 [2024-11-20 15:31:15.729439] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:12.048 [2024-11-20 15:31:15.729443] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:12.048 [2024-11-20 15:31:15.729451] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.729454] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.048 
[2024-11-20 15:31:15.729458] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21da690) 00:22:12.048 [2024-11-20 15:31:15.729463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.048 [2024-11-20 15:31:15.729473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c580, cid 3, qid 0 00:22:12.048 [2024-11-20 15:31:15.729535] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.048 [2024-11-20 15:31:15.729540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.048 [2024-11-20 15:31:15.729543] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.729547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c580) on tqpair=0x21da690 00:22:12.048 [2024-11-20 15:31:15.729555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.729559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.729562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21da690) 00:22:12.048 [2024-11-20 15:31:15.729568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.048 [2024-11-20 15:31:15.729577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c580, cid 3, qid 0 00:22:12.048 [2024-11-20 15:31:15.729645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.048 [2024-11-20 15:31:15.729651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.048 [2024-11-20 15:31:15.729654] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.729657] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c580) on 
tqpair=0x21da690 00:22:12.048 [2024-11-20 15:31:15.729665] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.729669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.729672] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21da690) 00:22:12.048 [2024-11-20 15:31:15.729678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.048 [2024-11-20 15:31:15.729687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c580, cid 3, qid 0 00:22:12.048 [2024-11-20 15:31:15.729755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.048 [2024-11-20 15:31:15.729761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.048 [2024-11-20 15:31:15.729764] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.729767] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c580) on tqpair=0x21da690 00:22:12.048 [2024-11-20 15:31:15.729776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.729781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.729784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21da690) 00:22:12.048 [2024-11-20 15:31:15.729790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.048 [2024-11-20 15:31:15.729800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c580, cid 3, qid 0 00:22:12.048 [2024-11-20 15:31:15.729865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.048 [2024-11-20 15:31:15.729871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:22:12.048 [2024-11-20 15:31:15.729874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.729877] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c580) on tqpair=0x21da690 00:22:12.048 [2024-11-20 15:31:15.729885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.729889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.729892] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21da690) 00:22:12.048 [2024-11-20 15:31:15.729898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.048 [2024-11-20 15:31:15.729907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223c580, cid 3, qid 0 00:22:12.048 [2024-11-20 15:31:15.733955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.048 [2024-11-20 15:31:15.733963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.048 [2024-11-20 15:31:15.733966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.733969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c580) on tqpair=0x21da690 00:22:12.048 [2024-11-20 15:31:15.733979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.733982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.733986] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21da690) 00:22:12.048 [2024-11-20 15:31:15.733991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.048 [2024-11-20 15:31:15.734002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x223c580, cid 3, qid 0
00:22:12.048 [2024-11-20 15:31:15.734135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:12.048 [2024-11-20 15:31:15.734141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:12.048 [2024-11-20 15:31:15.734144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:12.048 [2024-11-20 15:31:15.734147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223c580) on tqpair=0x21da690
00:22:12.048 [2024-11-20 15:31:15.734154] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds
00:22:12.048
00:22:12.048 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:22:12.048 [2024-11-20 15:31:15.773187] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization...
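The `-r` argument passed to `spdk_nvme_identify` above packs the NVMe-oF transport ID into a single string of space-separated `key:value` pairs (trtype, adrfam, traddr, trsvcid, subnqn). SPDK parses this in C (via `spdk_nvme_transport_id_parse`); the sketch below is a hypothetical, minimal Python equivalent, shown only to illustrate the string's structure — it is not part of the test or of SPDK.

```python
def parse_trid(trid: str) -> dict:
    """Split an SPDK-style transport ID string into a field dict.

    Hypothetical helper for illustration, not SPDK's own
    spdk_nvme_transport_id_parse(). Tokens are whitespace-separated;
    partition() splits at the FIRST ':' only, so subnqn values that
    themselves contain ':' (e.g. nqn.2016-06.io.spdk:cnode1) survive.
    """
    fields = {}
    for token in trid.split():
        key, _, value = token.partition(":")
        fields[key] = value
    return fields

# Same string the log passes via -r (leading space is harmless to split()).
trid = " trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1"
info = parse_trid(trid)
print(info["traddr"], info["trsvcid"], info["subnqn"])
```

Note that the colon inside the subsystem NQN is why a naive `token.split(":")` would be wrong here; splitting only at the first separator keeps `nqn.2016-06.io.spdk:cnode1` intact.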
00:22:12.048 [2024-11-20 15:31:15.773221] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2243753 ] 00:22:12.048 [2024-11-20 15:31:15.812670] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:12.048 [2024-11-20 15:31:15.812713] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:12.048 [2024-11-20 15:31:15.812718] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:12.048 [2024-11-20 15:31:15.812732] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:12.048 [2024-11-20 15:31:15.812741] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:12.048 [2024-11-20 15:31:15.816200] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:12.048 [2024-11-20 15:31:15.816229] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xaef690 0 00:22:12.048 [2024-11-20 15:31:15.830958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:12.048 [2024-11-20 15:31:15.830972] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:12.048 [2024-11-20 15:31:15.830977] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:12.048 [2024-11-20 15:31:15.830980] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:12.048 [2024-11-20 15:31:15.831007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.831012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.831015] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaef690) 00:22:12.048 [2024-11-20 15:31:15.831026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:12.048 [2024-11-20 15:31:15.831043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51100, cid 0, qid 0 00:22:12.048 [2024-11-20 15:31:15.837958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.048 [2024-11-20 15:31:15.837967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.048 [2024-11-20 15:31:15.837970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.048 [2024-11-20 15:31:15.837974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51100) on tqpair=0xaef690 00:22:12.048 [2024-11-20 15:31:15.837985] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:12.048 [2024-11-20 15:31:15.837991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:12.048 [2024-11-20 15:31:15.837996] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:12.049 [2024-11-20 15:31:15.838007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.838010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.838014] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaef690) 00:22:12.049 [2024-11-20 15:31:15.838021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.049 [2024-11-20 15:31:15.838034] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51100, cid 0, qid 0 00:22:12.049 [2024-11-20 15:31:15.838168] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.049 [2024-11-20 15:31:15.838174] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.049 [2024-11-20 15:31:15.838177] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.838180] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51100) on tqpair=0xaef690 00:22:12.049 [2024-11-20 15:31:15.838185] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:12.049 [2024-11-20 15:31:15.838192] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:12.049 [2024-11-20 15:31:15.838198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.838201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.838205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaef690) 00:22:12.049 [2024-11-20 15:31:15.838213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.049 [2024-11-20 15:31:15.838224] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51100, cid 0, qid 0 00:22:12.049 [2024-11-20 15:31:15.838290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.049 [2024-11-20 15:31:15.838296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.049 [2024-11-20 15:31:15.838299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.838302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51100) on tqpair=0xaef690 00:22:12.049 [2024-11-20 15:31:15.838307] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to check en (no timeout) 00:22:12.049 [2024-11-20 15:31:15.838315] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:12.049 [2024-11-20 15:31:15.838321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.838324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.838327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaef690) 00:22:12.049 [2024-11-20 15:31:15.838333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.049 [2024-11-20 15:31:15.838343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51100, cid 0, qid 0 00:22:12.049 [2024-11-20 15:31:15.838404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.049 [2024-11-20 15:31:15.838410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.049 [2024-11-20 15:31:15.838413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.838416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51100) on tqpair=0xaef690 00:22:12.049 [2024-11-20 15:31:15.838421] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:12.049 [2024-11-20 15:31:15.838429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.838433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.838436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaef690) 00:22:12.049 [2024-11-20 15:31:15.838441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.049 [2024-11-20 15:31:15.838451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51100, cid 0, qid 0 00:22:12.049 [2024-11-20 15:31:15.838513] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.049 [2024-11-20 15:31:15.838519] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.049 [2024-11-20 15:31:15.838522] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.838525] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51100) on tqpair=0xaef690 00:22:12.049 [2024-11-20 15:31:15.838529] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:12.049 [2024-11-20 15:31:15.838533] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:12.049 [2024-11-20 15:31:15.838540] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:12.049 [2024-11-20 15:31:15.838647] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:12.049 [2024-11-20 15:31:15.838652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:12.049 [2024-11-20 15:31:15.838658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.838663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.838667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaef690) 00:22:12.049 [2024-11-20 15:31:15.838672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.049 [2024-11-20 15:31:15.838682] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51100, cid 0, qid 0 00:22:12.049 [2024-11-20 15:31:15.838743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.049 [2024-11-20 15:31:15.838749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.049 [2024-11-20 15:31:15.838752] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.838755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51100) on tqpair=0xaef690 00:22:12.049 [2024-11-20 15:31:15.838759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:12.049 [2024-11-20 15:31:15.838767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.838771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.838774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaef690) 00:22:12.049 [2024-11-20 15:31:15.838780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.049 [2024-11-20 15:31:15.838790] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51100, cid 0, qid 0 00:22:12.049 [2024-11-20 15:31:15.838852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.049 [2024-11-20 15:31:15.838858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.049 [2024-11-20 15:31:15.838861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.838864] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51100) on tqpair=0xaef690 00:22:12.049 [2024-11-20 15:31:15.838868] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:12.049 [2024-11-20 15:31:15.838873] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:12.049 [2024-11-20 15:31:15.838879] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:12.049 [2024-11-20 15:31:15.838888] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:12.049 [2024-11-20 15:31:15.838896] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.838900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaef690) 00:22:12.049 [2024-11-20 15:31:15.838906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.049 [2024-11-20 15:31:15.838916] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51100, cid 0, qid 0 00:22:12.049 [2024-11-20 15:31:15.839031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.049 [2024-11-20 15:31:15.839036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.049 [2024-11-20 15:31:15.839040] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.839043] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaef690): datao=0, datal=4096, cccid=0 00:22:12.049 [2024-11-20 15:31:15.839047] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb51100) on tqpair(0xaef690): expected_datao=0, payload_size=4096 00:22:12.049 [2024-11-20 15:31:15.839051] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.839057] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.839062] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.839074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.049 [2024-11-20 15:31:15.839080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.049 [2024-11-20 15:31:15.839083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.839086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51100) on tqpair=0xaef690 00:22:12.049 [2024-11-20 15:31:15.839093] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:12.049 [2024-11-20 15:31:15.839097] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:12.049 [2024-11-20 15:31:15.839101] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:12.049 [2024-11-20 15:31:15.839107] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:12.049 [2024-11-20 15:31:15.839111] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:12.049 [2024-11-20 15:31:15.839115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:12.049 [2024-11-20 15:31:15.839124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:12.049 [2024-11-20 15:31:15.839130] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.839134] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.049 [2024-11-20 15:31:15.839137] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaef690) 00:22:12.050 [2024-11-20 15:31:15.839143] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:12.050 [2024-11-20 15:31:15.839153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51100, cid 0, qid 0 00:22:12.050 [2024-11-20 15:31:15.839220] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.050 [2024-11-20 15:31:15.839225] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.050 [2024-11-20 15:31:15.839228] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839232] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51100) on tqpair=0xaef690 00:22:12.050 [2024-11-20 15:31:15.839237] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaef690) 00:22:12.050 [2024-11-20 15:31:15.839249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.050 [2024-11-20 15:31:15.839254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839257] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xaef690) 00:22:12.050 [2024-11-20 15:31:15.839266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:12.050 [2024-11-20 15:31:15.839271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839277] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xaef690) 00:22:12.050 [2024-11-20 15:31:15.839282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.050 [2024-11-20 15:31:15.839287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaef690) 00:22:12.050 [2024-11-20 15:31:15.839300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.050 [2024-11-20 15:31:15.839304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:12.050 [2024-11-20 15:31:15.839312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:12.050 [2024-11-20 15:31:15.839317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaef690) 00:22:12.050 [2024-11-20 15:31:15.839326] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.050 [2024-11-20 15:31:15.839337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xb51100, cid 0, qid 0 00:22:12.050 [2024-11-20 15:31:15.839341] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51280, cid 1, qid 0 00:22:12.050 [2024-11-20 15:31:15.839346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51400, cid 2, qid 0 00:22:12.050 [2024-11-20 15:31:15.839350] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51580, cid 3, qid 0 00:22:12.050 [2024-11-20 15:31:15.839354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51700, cid 4, qid 0 00:22:12.050 [2024-11-20 15:31:15.839452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.050 [2024-11-20 15:31:15.839457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.050 [2024-11-20 15:31:15.839461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51700) on tqpair=0xaef690 00:22:12.050 [2024-11-20 15:31:15.839470] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:12.050 [2024-11-20 15:31:15.839474] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:12.050 [2024-11-20 15:31:15.839482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:12.050 [2024-11-20 15:31:15.839487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:12.050 [2024-11-20 15:31:15.839493] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.050 [2024-11-20 
15:31:15.839499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaef690) 00:22:12.050 [2024-11-20 15:31:15.839505] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:12.050 [2024-11-20 15:31:15.839514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51700, cid 4, qid 0 00:22:12.050 [2024-11-20 15:31:15.839579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.050 [2024-11-20 15:31:15.839585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.050 [2024-11-20 15:31:15.839588] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839591] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51700) on tqpair=0xaef690 00:22:12.050 [2024-11-20 15:31:15.839643] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:12.050 [2024-11-20 15:31:15.839653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:12.050 [2024-11-20 15:31:15.839661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaef690) 00:22:12.050 [2024-11-20 15:31:15.839670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.050 [2024-11-20 15:31:15.839680] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51700, cid 4, qid 0 00:22:12.050 [2024-11-20 15:31:15.839756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.050 [2024-11-20 15:31:15.839762] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.050 [2024-11-20 15:31:15.839765] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839769] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaef690): datao=0, datal=4096, cccid=4 00:22:12.050 [2024-11-20 15:31:15.839772] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb51700) on tqpair(0xaef690): expected_datao=0, payload_size=4096 00:22:12.050 [2024-11-20 15:31:15.839776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839782] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839785] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.050 [2024-11-20 15:31:15.839801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.050 [2024-11-20 15:31:15.839804] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839808] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51700) on tqpair=0xaef690 00:22:12.050 [2024-11-20 15:31:15.839815] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:12.050 [2024-11-20 15:31:15.839827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:12.050 [2024-11-20 15:31:15.839836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:12.050 [2024-11-20 15:31:15.839842] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 
on tqpair(0xaef690) 00:22:12.050 [2024-11-20 15:31:15.839851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.050 [2024-11-20 15:31:15.839861] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51700, cid 4, qid 0 00:22:12.050 [2024-11-20 15:31:15.839942] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.050 [2024-11-20 15:31:15.839955] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.050 [2024-11-20 15:31:15.839958] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839962] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaef690): datao=0, datal=4096, cccid=4 00:22:12.050 [2024-11-20 15:31:15.839965] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb51700) on tqpair(0xaef690): expected_datao=0, payload_size=4096 00:22:12.050 [2024-11-20 15:31:15.839969] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839975] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839978] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.839993] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.050 [2024-11-20 15:31:15.839998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.050 [2024-11-20 15:31:15.840001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.840004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51700) on tqpair=0xaef690 00:22:12.050 [2024-11-20 15:31:15.840017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:12.050 [2024-11-20 
15:31:15.840026] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:12.050 [2024-11-20 15:31:15.840032] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.840036] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaef690) 00:22:12.050 [2024-11-20 15:31:15.840041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.050 [2024-11-20 15:31:15.840052] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51700, cid 4, qid 0 00:22:12.050 [2024-11-20 15:31:15.840129] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.050 [2024-11-20 15:31:15.840134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.050 [2024-11-20 15:31:15.840137] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.840140] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaef690): datao=0, datal=4096, cccid=4 00:22:12.050 [2024-11-20 15:31:15.840145] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb51700) on tqpair(0xaef690): expected_datao=0, payload_size=4096 00:22:12.050 [2024-11-20 15:31:15.840148] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.840154] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.840157] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.050 [2024-11-20 15:31:15.840169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.051 [2024-11-20 15:31:15.840174] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.051 [2024-11-20 15:31:15.840177] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840181] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51700) on tqpair=0xaef690 00:22:12.051 [2024-11-20 15:31:15.840187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:12.051 [2024-11-20 15:31:15.840195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:12.051 [2024-11-20 15:31:15.840203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:12.051 [2024-11-20 15:31:15.840208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:12.051 [2024-11-20 15:31:15.840213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:12.051 [2024-11-20 15:31:15.840217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:12.051 [2024-11-20 15:31:15.840222] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:12.051 [2024-11-20 15:31:15.840226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:12.051 [2024-11-20 15:31:15.840230] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:12.051 [2024-11-20 15:31:15.840242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840246] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaef690) 00:22:12.051 [2024-11-20 15:31:15.840252] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.051 [2024-11-20 15:31:15.840257] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xaef690) 00:22:12.051 [2024-11-20 15:31:15.840272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.051 [2024-11-20 15:31:15.840284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51700, cid 4, qid 0 00:22:12.051 [2024-11-20 15:31:15.840289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51880, cid 5, qid 0 00:22:12.051 [2024-11-20 15:31:15.840365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.051 [2024-11-20 15:31:15.840371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.051 [2024-11-20 15:31:15.840373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51700) on tqpair=0xaef690 00:22:12.051 [2024-11-20 15:31:15.840383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.051 [2024-11-20 15:31:15.840388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.051 [2024-11-20 15:31:15.840391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51880) on tqpair=0xaef690 00:22:12.051 [2024-11-20 
15:31:15.840402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xaef690) 00:22:12.051 [2024-11-20 15:31:15.840411] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.051 [2024-11-20 15:31:15.840420] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51880, cid 5, qid 0 00:22:12.051 [2024-11-20 15:31:15.840482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.051 [2024-11-20 15:31:15.840488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.051 [2024-11-20 15:31:15.840491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51880) on tqpair=0xaef690 00:22:12.051 [2024-11-20 15:31:15.840502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xaef690) 00:22:12.051 [2024-11-20 15:31:15.840511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.051 [2024-11-20 15:31:15.840520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51880, cid 5, qid 0 00:22:12.051 [2024-11-20 15:31:15.840590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.051 [2024-11-20 15:31:15.840595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.051 [2024-11-20 15:31:15.840598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0xb51880) on tqpair=0xaef690 00:22:12.051 [2024-11-20 15:31:15.840610] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840614] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xaef690) 00:22:12.051 [2024-11-20 15:31:15.840619] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.051 [2024-11-20 15:31:15.840629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51880, cid 5, qid 0 00:22:12.051 [2024-11-20 15:31:15.840688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.051 [2024-11-20 15:31:15.840694] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.051 [2024-11-20 15:31:15.840697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840702] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51880) on tqpair=0xaef690 00:22:12.051 [2024-11-20 15:31:15.840716] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xaef690) 00:22:12.051 [2024-11-20 15:31:15.840725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.051 [2024-11-20 15:31:15.840731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaef690) 00:22:12.051 [2024-11-20 15:31:15.840739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.051 
[2024-11-20 15:31:15.840745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xaef690) 00:22:12.051 [2024-11-20 15:31:15.840754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.051 [2024-11-20 15:31:15.840760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840763] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xaef690) 00:22:12.051 [2024-11-20 15:31:15.840768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.051 [2024-11-20 15:31:15.840779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51880, cid 5, qid 0 00:22:12.051 [2024-11-20 15:31:15.840783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51700, cid 4, qid 0 00:22:12.051 [2024-11-20 15:31:15.840787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51a00, cid 6, qid 0 00:22:12.051 [2024-11-20 15:31:15.840791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51b80, cid 7, qid 0 00:22:12.051 [2024-11-20 15:31:15.840929] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.051 [2024-11-20 15:31:15.840935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.051 [2024-11-20 15:31:15.840938] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840941] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaef690): datao=0, datal=8192, cccid=5 00:22:12.051 [2024-11-20 15:31:15.840945] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0xb51880) on tqpair(0xaef690): expected_datao=0, payload_size=8192 00:22:12.051 [2024-11-20 15:31:15.840954] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840966] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840970] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840978] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.051 [2024-11-20 15:31:15.840983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.051 [2024-11-20 15:31:15.840986] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.840989] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaef690): datao=0, datal=512, cccid=4 00:22:12.051 [2024-11-20 15:31:15.840993] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb51700) on tqpair(0xaef690): expected_datao=0, payload_size=512 00:22:12.051 [2024-11-20 15:31:15.840997] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.841002] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.841005] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.051 [2024-11-20 15:31:15.841010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.051 [2024-11-20 15:31:15.841016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.052 [2024-11-20 15:31:15.841019] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.052 [2024-11-20 15:31:15.841023] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaef690): datao=0, datal=512, cccid=6 00:22:12.052 [2024-11-20 15:31:15.841026] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb51a00) on tqpair(0xaef690): expected_datao=0, 
payload_size=512 00:22:12.052 [2024-11-20 15:31:15.841030] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.052 [2024-11-20 15:31:15.841035] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.052 [2024-11-20 15:31:15.841039] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.052 [2024-11-20 15:31:15.841044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.052 [2024-11-20 15:31:15.841048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.052 [2024-11-20 15:31:15.841051] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.052 [2024-11-20 15:31:15.841054] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaef690): datao=0, datal=4096, cccid=7 00:22:12.052 [2024-11-20 15:31:15.841058] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb51b80) on tqpair(0xaef690): expected_datao=0, payload_size=4096 00:22:12.052 [2024-11-20 15:31:15.841062] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.052 [2024-11-20 15:31:15.841068] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.052 [2024-11-20 15:31:15.841071] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.052 [2024-11-20 15:31:15.841078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.052 [2024-11-20 15:31:15.841083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.052 [2024-11-20 15:31:15.841086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.052 [2024-11-20 15:31:15.841090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51880) on tqpair=0xaef690 00:22:12.052 [2024-11-20 15:31:15.841099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.052 [2024-11-20 15:31:15.841105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.052 [2024-11-20 
15:31:15.841108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:12.052 [2024-11-20 15:31:15.841111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51700) on tqpair=0xaef690
00:22:12.052 [2024-11-20 15:31:15.841119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:12.052 [2024-11-20 15:31:15.841125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:12.052 [2024-11-20 15:31:15.841127] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:12.052 [2024-11-20 15:31:15.841131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51a00) on tqpair=0xaef690
00:22:12.052 [2024-11-20 15:31:15.841137] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:12.052 [2024-11-20 15:31:15.841142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:12.052 [2024-11-20 15:31:15.841145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:12.052 [2024-11-20 15:31:15.841148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51b80) on tqpair=0xaef690
00:22:12.052 =====================================================
00:22:12.052 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:12.052 =====================================================
00:22:12.052 Controller Capabilities/Features
00:22:12.052 ================================
00:22:12.052 Vendor ID: 8086
00:22:12.052 Subsystem Vendor ID: 8086
00:22:12.052 Serial Number: SPDK00000000000001
00:22:12.052 Model Number: SPDK bdev Controller
00:22:12.052 Firmware Version: 25.01
00:22:12.052 Recommended Arb Burst: 6
00:22:12.052 IEEE OUI Identifier: e4 d2 5c
00:22:12.052 Multi-path I/O
00:22:12.052 May have multiple subsystem ports: Yes
00:22:12.052 May have multiple controllers: Yes
00:22:12.052 Associated with SR-IOV VF: No
00:22:12.052 Max Data Transfer Size: 131072
00:22:12.052 Max Number of Namespaces: 32
00:22:12.052 Max Number of I/O Queues: 127
00:22:12.052 NVMe Specification Version (VS): 1.3
00:22:12.052 NVMe Specification Version (Identify): 1.3
00:22:12.052 Maximum Queue Entries: 128
00:22:12.052 Contiguous Queues Required: Yes
00:22:12.052 Arbitration Mechanisms Supported
00:22:12.052 Weighted Round Robin: Not Supported
00:22:12.052 Vendor Specific: Not Supported
00:22:12.052 Reset Timeout: 15000 ms
00:22:12.052 Doorbell Stride: 4 bytes
00:22:12.052 NVM Subsystem Reset: Not Supported
00:22:12.052 Command Sets Supported
00:22:12.052 NVM Command Set: Supported
00:22:12.052 Boot Partition: Not Supported
00:22:12.052 Memory Page Size Minimum: 4096 bytes
00:22:12.052 Memory Page Size Maximum: 4096 bytes
00:22:12.052 Persistent Memory Region: Not Supported
00:22:12.052 Optional Asynchronous Events Supported
00:22:12.052 Namespace Attribute Notices: Supported
00:22:12.052 Firmware Activation Notices: Not Supported
00:22:12.052 ANA Change Notices: Not Supported
00:22:12.052 PLE Aggregate Log Change Notices: Not Supported
00:22:12.052 LBA Status Info Alert Notices: Not Supported
00:22:12.052 EGE Aggregate Log Change Notices: Not Supported
00:22:12.052 Normal NVM Subsystem Shutdown event: Not Supported
00:22:12.052 Zone Descriptor Change Notices: Not Supported
00:22:12.052 Discovery Log Change Notices: Not Supported
00:22:12.052 Controller Attributes
00:22:12.052 128-bit Host Identifier: Supported
00:22:12.052 Non-Operational Permissive Mode: Not Supported
00:22:12.052 NVM Sets: Not Supported
00:22:12.052 Read Recovery Levels: Not Supported
00:22:12.052 Endurance Groups: Not Supported
00:22:12.052 Predictable Latency Mode: Not Supported
00:22:12.052 Traffic Based Keep ALive: Not Supported
00:22:12.052 Namespace Granularity: Not Supported
00:22:12.052 SQ Associations: Not Supported
00:22:12.052 UUID List: Not Supported
00:22:12.052 Multi-Domain Subsystem: Not Supported
00:22:12.052 Fixed Capacity Management: Not Supported
00:22:12.052 Variable Capacity Management: Not Supported
00:22:12.052 Delete Endurance Group: Not Supported
00:22:12.052 Delete NVM Set: Not Supported
00:22:12.052 Extended LBA Formats Supported: Not Supported
00:22:12.052 Flexible Data Placement Supported: Not Supported
00:22:12.052
00:22:12.052 Controller Memory Buffer Support
00:22:12.052 ================================
00:22:12.052 Supported: No
00:22:12.052
00:22:12.052 Persistent Memory Region Support
00:22:12.052 ================================
00:22:12.052 Supported: No
00:22:12.052
00:22:12.052 Admin Command Set Attributes
00:22:12.052 ============================
00:22:12.052 Security Send/Receive: Not Supported
00:22:12.052 Format NVM: Not Supported
00:22:12.052 Firmware Activate/Download: Not Supported
00:22:12.052 Namespace Management: Not Supported
00:22:12.052 Device Self-Test: Not Supported
00:22:12.052 Directives: Not Supported
00:22:12.052 NVMe-MI: Not Supported
00:22:12.052 Virtualization Management: Not Supported
00:22:12.052 Doorbell Buffer Config: Not Supported
00:22:12.052 Get LBA Status Capability: Not Supported
00:22:12.052 Command & Feature Lockdown Capability: Not Supported
00:22:12.052 Abort Command Limit: 4
00:22:12.052 Async Event Request Limit: 4
00:22:12.052 Number of Firmware Slots: N/A
00:22:12.052 Firmware Slot 1 Read-Only: N/A
00:22:12.052 Firmware Activation Without Reset: N/A
00:22:12.052 Multiple Update Detection Support: N/A
00:22:12.052 Firmware Update Granularity: No Information Provided
00:22:12.052 Per-Namespace SMART Log: No
00:22:12.052 Asymmetric Namespace Access Log Page: Not Supported
00:22:12.052 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:22:12.052 Command Effects Log Page: Supported
00:22:12.052 Get Log Page Extended Data: Supported
00:22:12.052 Telemetry Log Pages: Not Supported
00:22:12.052 Persistent Event Log Pages: Not Supported
00:22:12.052 Supported Log Pages Log Page: May Support
00:22:12.052 Commands Supported & Effects Log Page: Not Supported
00:22:12.052 Feature Identifiers & Effects Log Page:May Support
00:22:12.052 NVMe-MI Commands & Effects Log Page: May Support
00:22:12.052 Data Area 4 for Telemetry Log: Not Supported
00:22:12.052 Error Log Page Entries Supported: 128
00:22:12.052 Keep Alive: Supported
00:22:12.052 Keep Alive Granularity: 10000 ms
00:22:12.052
00:22:12.052 NVM Command Set Attributes
00:22:12.052 ==========================
00:22:12.052 Submission Queue Entry Size
00:22:12.052 Max: 64
00:22:12.052 Min: 64
00:22:12.052 Completion Queue Entry Size
00:22:12.052 Max: 16
00:22:12.052 Min: 16
00:22:12.052 Number of Namespaces: 32
00:22:12.052 Compare Command: Supported
00:22:12.052 Write Uncorrectable Command: Not Supported
00:22:12.052 Dataset Management Command: Supported
00:22:12.052 Write Zeroes Command: Supported
00:22:12.052 Set Features Save Field: Not Supported
00:22:12.052 Reservations: Supported
00:22:12.052 Timestamp: Not Supported
00:22:12.052 Copy: Supported
00:22:12.052 Volatile Write Cache: Present
00:22:12.052 Atomic Write Unit (Normal): 1
00:22:12.052 Atomic Write Unit (PFail): 1
00:22:12.052 Atomic Compare & Write Unit: 1
00:22:12.052 Fused Compare & Write: Supported
00:22:12.052 Scatter-Gather List
00:22:12.052 SGL Command Set: Supported
00:22:12.052 SGL Keyed: Supported
00:22:12.052 SGL Bit Bucket Descriptor: Not Supported
00:22:12.053 SGL Metadata Pointer: Not Supported
00:22:12.053 Oversized SGL: Not Supported
00:22:12.053 SGL Metadata Address: Not Supported
00:22:12.053 SGL Offset: Supported
00:22:12.053 Transport SGL Data Block: Not Supported
00:22:12.053 Replay Protected Memory Block: Not Supported
00:22:12.053
00:22:12.053 Firmware Slot Information
00:22:12.053 =========================
00:22:12.053 Active slot: 1
00:22:12.053 Slot 1 Firmware Revision: 25.01
00:22:12.053
00:22:12.053
00:22:12.053 Commands Supported and Effects
00:22:12.053 ==============================
00:22:12.053 Admin Commands
00:22:12.053 --------------
00:22:12.053 Get Log Page (02h): Supported
00:22:12.053 Identify (06h): Supported
00:22:12.053 Abort (08h): Supported
00:22:12.053 Set Features (09h): Supported
00:22:12.053 Get Features (0Ah): Supported
00:22:12.053 Asynchronous Event Request (0Ch): Supported
00:22:12.053 Keep Alive (18h): Supported
00:22:12.053 I/O Commands
00:22:12.053 ------------
00:22:12.053 Flush (00h): Supported LBA-Change
00:22:12.053 Write (01h): Supported LBA-Change
00:22:12.053 Read (02h): Supported
00:22:12.053 Compare (05h): Supported
00:22:12.053 Write Zeroes (08h): Supported LBA-Change
00:22:12.053 Dataset Management (09h): Supported LBA-Change
00:22:12.053 Copy (19h): Supported LBA-Change
00:22:12.053
00:22:12.053 Error Log
00:22:12.053 =========
00:22:12.053
00:22:12.053 Arbitration
00:22:12.053 ===========
00:22:12.053 Arbitration Burst: 1
00:22:12.053
00:22:12.053 Power Management
00:22:12.053 ================
00:22:12.053 Number of Power States: 1
00:22:12.053 Current Power State: Power State #0
00:22:12.053 Power State #0:
00:22:12.053 Max Power: 0.00 W
00:22:12.053 Non-Operational State: Operational
00:22:12.053 Entry Latency: Not Reported
00:22:12.053 Exit Latency: Not Reported
00:22:12.053 Relative Read Throughput: 0
00:22:12.053 Relative Read Latency: 0
00:22:12.053 Relative Write Throughput: 0
00:22:12.053 Relative Write Latency: 0
00:22:12.053 Idle Power: Not Reported
00:22:12.053 Active Power: Not Reported
00:22:12.053 Non-Operational Permissive Mode: Not Supported
00:22:12.053
00:22:12.053 Health Information
00:22:12.053 ==================
00:22:12.053 Critical Warnings:
00:22:12.053 Available Spare Space: OK
00:22:12.053 Temperature: OK
00:22:12.053 Device Reliability: OK
00:22:12.053 Read Only: No
00:22:12.053 Volatile Memory Backup: OK
00:22:12.053 Current Temperature: 0 Kelvin (-273 Celsius)
00:22:12.053 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:22:12.053 Available Spare: 0%
00:22:12.053 Available Spare Threshold: 0%
00:22:12.053 Life Percentage Used:[2024-11-20 15:31:15.841231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:12.053
[2024-11-20 15:31:15.841236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xaef690) 00:22:12.053 [2024-11-20 15:31:15.841242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.053 [2024-11-20 15:31:15.841254] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51b80, cid 7, qid 0 00:22:12.053 [2024-11-20 15:31:15.841330] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.053 [2024-11-20 15:31:15.841336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.053 [2024-11-20 15:31:15.841339] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.053 [2024-11-20 15:31:15.841342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51b80) on tqpair=0xaef690 00:22:12.053 [2024-11-20 15:31:15.841372] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:12.053 [2024-11-20 15:31:15.841382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51100) on tqpair=0xaef690 00:22:12.053 [2024-11-20 15:31:15.841388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.053 [2024-11-20 15:31:15.841392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51280) on tqpair=0xaef690 00:22:12.053 [2024-11-20 15:31:15.841397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.053 [2024-11-20 15:31:15.841401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51400) on tqpair=0xaef690 00:22:12.053 [2024-11-20 15:31:15.841405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.053 
[2024-11-20 15:31:15.841409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51580) on tqpair=0xaef690 00:22:12.053 [2024-11-20 15:31:15.841413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.053 [2024-11-20 15:31:15.841420] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.053 [2024-11-20 15:31:15.841423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.053 [2024-11-20 15:31:15.841426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaef690) 00:22:12.053 [2024-11-20 15:31:15.841432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.053 [2024-11-20 15:31:15.841444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51580, cid 3, qid 0 00:22:12.053 [2024-11-20 15:31:15.841514] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.053 [2024-11-20 15:31:15.841520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.053 [2024-11-20 15:31:15.841523] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.053 [2024-11-20 15:31:15.841526] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51580) on tqpair=0xaef690 00:22:12.053 [2024-11-20 15:31:15.841532] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.053 [2024-11-20 15:31:15.841535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.053 [2024-11-20 15:31:15.841538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaef690) 00:22:12.053 [2024-11-20 15:31:15.841544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.053 [2024-11-20 15:31:15.841556] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51580, cid 3, qid 0 00:22:12.053 [2024-11-20 15:31:15.841631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.053 [2024-11-20 15:31:15.841637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.053 [2024-11-20 15:31:15.841640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.053 [2024-11-20 15:31:15.841643] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51580) on tqpair=0xaef690 00:22:12.053 [2024-11-20 15:31:15.841648] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:12.053 [2024-11-20 15:31:15.841652] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:12.053 [2024-11-20 15:31:15.841660] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.053 [2024-11-20 15:31:15.841663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.053 [2024-11-20 15:31:15.841666] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaef690) 00:22:12.053 [2024-11-20 15:31:15.841672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.053 [2024-11-20 15:31:15.841684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51580, cid 3, qid 0 00:22:12.053 [2024-11-20 15:31:15.841748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.053 [2024-11-20 15:31:15.841754] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.053 [2024-11-20 15:31:15.841757] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.053 [2024-11-20 15:31:15.841760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51580) on tqpair=0xaef690 00:22:12.053 [2024-11-20 15:31:15.841769] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.053 [2024-11-20 15:31:15.841773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.053 [2024-11-20 15:31:15.841776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaef690) 00:22:12.053 [2024-11-20 15:31:15.841781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.053 [2024-11-20 15:31:15.841791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51580, cid 3, qid 0 00:22:12.053 [2024-11-20 15:31:15.841857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.053 [2024-11-20 15:31:15.841863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.053 [2024-11-20 15:31:15.841866] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.053 [2024-11-20 15:31:15.841869] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51580) on tqpair=0xaef690 00:22:12.053 [2024-11-20 15:31:15.841877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.053 [2024-11-20 15:31:15.841881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.053 [2024-11-20 15:31:15.841884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaef690) 00:22:12.053 [2024-11-20 15:31:15.841889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.053 [2024-11-20 15:31:15.841899] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51580, cid 3, qid 0 00:22:12.053 [2024-11-20 15:31:15.845956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.053 [2024-11-20 15:31:15.845963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.053 [2024-11-20 15:31:15.845966] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.053 [2024-11-20 15:31:15.845970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51580) on tqpair=0xaef690 00:22:12.053 [2024-11-20 15:31:15.845979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.053 [2024-11-20 15:31:15.845982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.053 [2024-11-20 15:31:15.845985] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaef690) 00:22:12.053 [2024-11-20 15:31:15.845991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.053 [2024-11-20 15:31:15.846002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb51580, cid 3, qid 0 00:22:12.053 [2024-11-20 15:31:15.846132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.054 [2024-11-20 15:31:15.846138] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.054 [2024-11-20 15:31:15.846141] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.054 [2024-11-20 15:31:15.846144] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb51580) on tqpair=0xaef690 00:22:12.054 [2024-11-20 15:31:15.846151] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:22:12.054 0% 00:22:12.054 Data Units Read: 0 00:22:12.054 Data Units Written: 0 00:22:12.054 Host Read Commands: 0 00:22:12.054 Host Write Commands: 0 00:22:12.054 Controller Busy Time: 0 minutes 00:22:12.054 Power Cycles: 0 00:22:12.054 Power On Hours: 0 hours 00:22:12.054 Unsafe Shutdowns: 0 00:22:12.054 Unrecoverable Media Errors: 0 00:22:12.054 Lifetime Error Log Entries: 0 00:22:12.054 Warning Temperature Time: 0 minutes 00:22:12.054 Critical Temperature Time: 0 minutes 00:22:12.054 00:22:12.054 Number of 
Queues 00:22:12.054 ================ 00:22:12.054 Number of I/O Submission Queues: 127 00:22:12.054 Number of I/O Completion Queues: 127 00:22:12.054 00:22:12.054 Active Namespaces 00:22:12.054 ================= 00:22:12.054 Namespace ID:1 00:22:12.054 Error Recovery Timeout: Unlimited 00:22:12.054 Command Set Identifier: NVM (00h) 00:22:12.054 Deallocate: Supported 00:22:12.054 Deallocated/Unwritten Error: Not Supported 00:22:12.054 Deallocated Read Value: Unknown 00:22:12.054 Deallocate in Write Zeroes: Not Supported 00:22:12.054 Deallocated Guard Field: 0xFFFF 00:22:12.054 Flush: Supported 00:22:12.054 Reservation: Supported 00:22:12.054 Namespace Sharing Capabilities: Multiple Controllers 00:22:12.054 Size (in LBAs): 131072 (0GiB) 00:22:12.054 Capacity (in LBAs): 131072 (0GiB) 00:22:12.054 Utilization (in LBAs): 131072 (0GiB) 00:22:12.054 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:12.054 EUI64: ABCDEF0123456789 00:22:12.054 UUID: 60cb7eb3-2b02-40f1-8671-403f11afe29e 00:22:12.054 Thin Provisioning: Not Supported 00:22:12.054 Per-NS Atomic Units: Yes 00:22:12.054 Atomic Boundary Size (Normal): 0 00:22:12.054 Atomic Boundary Size (PFail): 0 00:22:12.054 Atomic Boundary Offset: 0 00:22:12.054 Maximum Single Source Range Length: 65535 00:22:12.054 Maximum Copy Length: 65535 00:22:12.054 Maximum Source Range Count: 1 00:22:12.054 NGUID/EUI64 Never Reused: No 00:22:12.054 Namespace Write Protected: No 00:22:12.054 Number of LBA Formats: 1 00:22:12.054 Current LBA Format: LBA Format #00 00:22:12.054 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:12.054 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@10 -- # set +x 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:12.054 rmmod nvme_tcp 00:22:12.054 rmmod nvme_fabrics 00:22:12.054 rmmod nvme_keyring 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2243511 ']' 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2243511 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2243511 ']' 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2243511 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:12.054 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:12.313 15:31:15 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2243511 00:22:12.313 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:12.313 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:12.313 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2243511' 00:22:12.313 killing process with pid 2243511 00:22:12.313 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2243511 00:22:12.313 15:31:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2243511 00:22:12.313 15:31:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:12.313 15:31:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:12.313 15:31:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:12.313 15:31:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:12.313 15:31:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:12.313 15:31:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:12.313 15:31:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:12.313 15:31:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:12.313 15:31:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:12.313 15:31:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.313 15:31:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.313 15:31:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:14.850 00:22:14.850 real 0m9.308s 00:22:14.850 user 0m5.343s 00:22:14.850 sys 0m4.878s 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:14.850 ************************************ 00:22:14.850 END TEST nvmf_identify 00:22:14.850 ************************************ 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.850 ************************************ 00:22:14.850 START TEST nvmf_perf 00:22:14.850 ************************************ 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:14.850 * Looking for test storage... 
00:22:14.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:14.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.850 --rc genhtml_branch_coverage=1 00:22:14.850 --rc genhtml_function_coverage=1 00:22:14.850 --rc genhtml_legend=1 00:22:14.850 --rc geninfo_all_blocks=1 00:22:14.850 --rc geninfo_unexecuted_blocks=1 00:22:14.850 00:22:14.850 ' 00:22:14.850 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:14.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:22:14.851 --rc genhtml_branch_coverage=1 00:22:14.851 --rc genhtml_function_coverage=1 00:22:14.851 --rc genhtml_legend=1 00:22:14.851 --rc geninfo_all_blocks=1 00:22:14.851 --rc geninfo_unexecuted_blocks=1 00:22:14.851 00:22:14.851 ' 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:14.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.851 --rc genhtml_branch_coverage=1 00:22:14.851 --rc genhtml_function_coverage=1 00:22:14.851 --rc genhtml_legend=1 00:22:14.851 --rc geninfo_all_blocks=1 00:22:14.851 --rc geninfo_unexecuted_blocks=1 00:22:14.851 00:22:14.851 ' 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:14.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.851 --rc genhtml_branch_coverage=1 00:22:14.851 --rc genhtml_function_coverage=1 00:22:14.851 --rc genhtml_legend=1 00:22:14.851 --rc geninfo_all_blocks=1 00:22:14.851 --rc geninfo_unexecuted_blocks=1 00:22:14.851 00:22:14.851 ' 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:14.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:14.851 15:31:18 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:14.851 15:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:21.422 15:31:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.422 
15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:21.422 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:21.422 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:21.422 Found net devices under 0000:86:00.0: cvl_0_0 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:21.422 15:31:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:21.422 Found net devices under 0000:86:00.1: cvl_0_1 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.422 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:21.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:22:21.423 00:22:21.423 --- 10.0.0.2 ping statistics --- 00:22:21.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.423 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:21.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:22:21.423 00:22:21.423 --- 10.0.0.1 ping statistics --- 00:22:21.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.423 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2247274 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2247274 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2247274 ']' 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:21.423 [2024-11-20 15:31:24.557721] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:22:21.423 [2024-11-20 15:31:24.557771] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.423 [2024-11-20 15:31:24.637700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:21.423 [2024-11-20 15:31:24.681056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.423 [2024-11-20 15:31:24.681093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.423 [2024-11-20 15:31:24.681100] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.423 [2024-11-20 15:31:24.681106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.423 [2024-11-20 15:31:24.681111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:21.423 [2024-11-20 15:31:24.682655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.423 [2024-11-20 15:31:24.682694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.423 [2024-11-20 15:31:24.682801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.423 [2024-11-20 15:31:24.682802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:21.423 15:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:23.954 15:31:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:23.954 15:31:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:24.212 15:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:24.212 15:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:24.470 15:31:28 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:24.470 15:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:24.470 15:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:24.470 15:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:24.470 15:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:24.728 [2024-11-20 15:31:28.464180] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.728 15:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:24.986 15:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:24.987 15:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:25.245 15:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:25.245 15:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:25.245 15:31:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:25.502 [2024-11-20 15:31:29.267163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.502 15:31:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:22:25.761 15:31:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:25.761 15:31:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:25.761 15:31:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:25.761 15:31:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:26.889 Initializing NVMe Controllers 00:22:26.889 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:26.889 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:26.889 Initialization complete. Launching workers. 00:22:26.890 ======================================================== 00:22:26.890 Latency(us) 00:22:26.890 Device Information : IOPS MiB/s Average min max 00:22:26.890 PCIE (0000:5e:00.0) NSID 1 from core 0: 96838.06 378.27 330.12 25.73 4414.93 00:22:26.890 ======================================================== 00:22:26.890 Total : 96838.06 378.27 330.12 25.73 4414.93 00:22:26.890 00:22:26.890 15:31:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:28.265 Initializing NVMe Controllers 00:22:28.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:28.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:28.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:28.265 Initialization complete. Launching workers. 
00:22:28.265 ======================================================== 00:22:28.265 Latency(us) 00:22:28.265 Device Information : IOPS MiB/s Average min max 00:22:28.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 116.00 0.45 8886.92 111.00 45620.04 00:22:28.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 68.00 0.27 15009.94 4955.36 47898.93 00:22:28.265 ======================================================== 00:22:28.265 Total : 184.00 0.72 11149.77 111.00 47898.93 00:22:28.265 00:22:28.265 15:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:29.640 Initializing NVMe Controllers 00:22:29.640 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:29.640 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:29.640 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:29.640 Initialization complete. Launching workers. 
00:22:29.640 ======================================================== 00:22:29.640 Latency(us) 00:22:29.640 Device Information : IOPS MiB/s Average min max 00:22:29.640 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10970.65 42.85 2915.49 422.81 8180.95 00:22:29.640 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3853.50 15.05 8330.07 5281.01 16062.98 00:22:29.640 ======================================================== 00:22:29.640 Total : 14824.15 57.91 4323.00 422.81 16062.98 00:22:29.640 00:22:29.640 15:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:29.640 15:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:29.640 15:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:32.171 Initializing NVMe Controllers 00:22:32.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:32.171 Controller IO queue size 128, less than required. 00:22:32.171 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:32.171 Controller IO queue size 128, less than required. 00:22:32.171 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:32.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:32.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:32.171 Initialization complete. Launching workers. 
00:22:32.171 ======================================================== 00:22:32.171 Latency(us) 00:22:32.171 Device Information : IOPS MiB/s Average min max 00:22:32.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1769.27 442.32 73273.57 41404.80 112423.06 00:22:32.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 622.57 155.64 211058.40 72657.07 325686.64 00:22:32.171 ======================================================== 00:22:32.171 Total : 2391.84 597.96 109137.35 41404.80 325686.64 00:22:32.171 00:22:32.171 15:31:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:32.430 No valid NVMe controllers or AIO or URING devices found 00:22:32.430 Initializing NVMe Controllers 00:22:32.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:32.430 Controller IO queue size 128, less than required. 00:22:32.430 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:32.430 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:32.430 Controller IO queue size 128, less than required. 00:22:32.430 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:32.430 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:32.430 WARNING: Some requested NVMe devices were skipped 00:22:32.430 15:31:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:35.717 Initializing NVMe Controllers 00:22:35.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:35.717 Controller IO queue size 128, less than required. 00:22:35.717 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:35.717 Controller IO queue size 128, less than required. 00:22:35.717 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:35.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:35.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:35.717 Initialization complete. Launching workers. 
00:22:35.717 00:22:35.717 ==================== 00:22:35.717 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:35.717 TCP transport: 00:22:35.717 polls: 11155 00:22:35.717 idle_polls: 7951 00:22:35.717 sock_completions: 3204 00:22:35.717 nvme_completions: 6119 00:22:35.717 submitted_requests: 9166 00:22:35.717 queued_requests: 1 00:22:35.717 00:22:35.717 ==================== 00:22:35.717 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:35.717 TCP transport: 00:22:35.717 polls: 11258 00:22:35.717 idle_polls: 7447 00:22:35.717 sock_completions: 3811 00:22:35.717 nvme_completions: 6801 00:22:35.717 submitted_requests: 10206 00:22:35.717 queued_requests: 1 00:22:35.717 ======================================================== 00:22:35.717 Latency(us) 00:22:35.717 Device Information : IOPS MiB/s Average min max 00:22:35.717 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1529.30 382.33 86067.35 48057.23 131380.49 00:22:35.717 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1699.78 424.95 75744.45 47674.70 127150.48 00:22:35.717 ======================================================== 00:22:35.717 Total : 3229.09 807.27 80633.40 47674.70 131380.49 00:22:35.717 00:22:35.717 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:35.717 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:35.717 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:35.717 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:35.717 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:35.717 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:35.717 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:22:35.717 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:35.718 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:35.718 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:35.718 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:35.718 rmmod nvme_tcp 00:22:35.718 rmmod nvme_fabrics 00:22:35.718 rmmod nvme_keyring 00:22:35.718 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:35.718 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:35.718 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:35.718 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2247274 ']' 00:22:35.718 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2247274 00:22:35.718 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2247274 ']' 00:22:35.718 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2247274 00:22:35.718 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:22:35.718 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.718 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2247274 00:22:35.718 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:35.718 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:35.718 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2247274' 00:22:35.718 killing process with pid 2247274 00:22:35.718 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 2247274 00:22:35.718 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2247274 00:22:37.095 15:31:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:37.095 15:31:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:37.095 15:31:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:37.095 15:31:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:37.095 15:31:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:37.095 15:31:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:37.095 15:31:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:37.095 15:31:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:37.095 15:31:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:37.095 15:31:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.095 15:31:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.095 15:31:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.999 15:31:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:39.259 00:22:39.259 real 0m24.587s 00:22:39.259 user 1m4.257s 00:22:39.259 sys 0m8.344s 00:22:39.259 15:31:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:39.259 15:31:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:39.259 ************************************ 00:22:39.259 END TEST nvmf_perf 00:22:39.259 ************************************ 00:22:39.259 15:31:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:39.259 15:31:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:39.259 15:31:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:39.259 15:31:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.259 ************************************ 00:22:39.259 START TEST nvmf_fio_host 00:22:39.259 ************************************ 00:22:39.259 15:31:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:39.259 * Looking for test storage... 00:22:39.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:39.259 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:39.259 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:39.259 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:39.259 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:39.259 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:39.259 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:39.259 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:39.259 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:39.259 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:39.259 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:39.259 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:39.259 15:31:43 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:39.260 15:31:43 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:39.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.260 --rc genhtml_branch_coverage=1 00:22:39.260 --rc genhtml_function_coverage=1 00:22:39.260 --rc genhtml_legend=1 00:22:39.260 --rc geninfo_all_blocks=1 00:22:39.260 --rc geninfo_unexecuted_blocks=1 00:22:39.260 00:22:39.260 ' 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:39.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.260 --rc genhtml_branch_coverage=1 00:22:39.260 --rc genhtml_function_coverage=1 00:22:39.260 --rc genhtml_legend=1 00:22:39.260 --rc geninfo_all_blocks=1 00:22:39.260 --rc geninfo_unexecuted_blocks=1 00:22:39.260 00:22:39.260 ' 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:39.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.260 --rc genhtml_branch_coverage=1 00:22:39.260 --rc genhtml_function_coverage=1 00:22:39.260 --rc genhtml_legend=1 00:22:39.260 --rc geninfo_all_blocks=1 00:22:39.260 --rc geninfo_unexecuted_blocks=1 00:22:39.260 00:22:39.260 ' 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:39.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.260 --rc genhtml_branch_coverage=1 00:22:39.260 --rc genhtml_function_coverage=1 00:22:39.260 --rc genhtml_legend=1 00:22:39.260 --rc geninfo_all_blocks=1 00:22:39.260 --rc geninfo_unexecuted_blocks=1 00:22:39.260 00:22:39.260 ' 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.260 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:39.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:39.520 15:31:43 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:39.520 15:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:22:46.100 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:46.100 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.100 15:31:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:46.100 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:46.101 Found net devices under 0000:86:00.0: cvl_0_0 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:46.101 Found net devices under 0000:86:00.1: cvl_0_1 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.101 15:31:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:46.101 15:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:46.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:22:46.101 00:22:46.101 --- 10.0.0.2 ping statistics --- 00:22:46.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.101 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:46.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:46.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:22:46.101 00:22:46.101 --- 10.0.0.1 ping statistics --- 00:22:46.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.101 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2253386 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2253386 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2253386 ']' 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.101 [2024-11-20 15:31:49.146757] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:22:46.101 [2024-11-20 15:31:49.146801] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.101 [2024-11-20 15:31:49.225500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:46.101 [2024-11-20 15:31:49.267771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.101 [2024-11-20 15:31:49.267811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:46.101 [2024-11-20 15:31:49.267818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.101 [2024-11-20 15:31:49.267827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.101 [2024-11-20 15:31:49.267832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:46.101 [2024-11-20 15:31:49.269459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.101 [2024-11-20 15:31:49.269497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.101 [2024-11-20 15:31:49.269607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.101 [2024-11-20 15:31:49.269609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:46.101 [2024-11-20 15:31:49.542257] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:46.101 Malloc1 00:22:46.101 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:46.359 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:46.359 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:46.618 [2024-11-20 15:31:50.411253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.618 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:46.875 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:46.876 15:31:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:46.876 15:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:47.134 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:47.134 fio-3.35 00:22:47.134 Starting 1 thread 00:22:49.666 00:22:49.666 test: (groupid=0, jobs=1): err= 0: pid=2253835: Wed Nov 20 15:31:53 2024 00:22:49.666 read: IOPS=11.6k, BW=45.4MiB/s (47.6MB/s)(91.0MiB/2004msec) 00:22:49.666 slat (nsec): min=1557, max=482702, avg=1949.98, stdev=4414.49 00:22:49.666 clat (usec): min=3749, max=10690, avg=6085.68, stdev=481.10 00:22:49.666 lat (usec): min=3752, max=10692, avg=6087.63, stdev=481.19 00:22:49.667 clat percentiles (usec): 00:22:49.667 | 1.00th=[ 4883], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:22:49.667 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6194], 00:22:49.667 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6783], 00:22:49.667 | 99.00th=[ 7111], 99.50th=[ 7373], 99.90th=[ 9241], 99.95th=[10028], 00:22:49.667 | 99.99th=[10552] 00:22:49.667 bw ( KiB/s): min=45696, max=46912, per=99.86%, avg=46422.00, stdev=518.52, samples=4 00:22:49.667 iops : min=11424, max=11728, avg=11605.50, stdev=129.63, samples=4 00:22:49.667 write: IOPS=11.5k, BW=45.1MiB/s (47.3MB/s)(90.3MiB/2004msec); 0 zone resets 00:22:49.667 slat (nsec): min=1616, max=380653, avg=2020.37, stdev=3136.24 00:22:49.667 clat (usec): min=3120, max=8955, avg=4933.74, stdev=384.35 00:22:49.667 lat (usec): min=3136, max=8957, avg=4935.76, stdev=384.60 00:22:49.667 clat percentiles (usec): 00:22:49.667 | 1.00th=[ 3982], 5.00th=[ 4359], 10.00th=[ 4490], 20.00th=[ 4621], 00:22:49.667 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4948], 60.00th=[ 5014], 
00:22:49.667 | 70.00th=[ 5145], 80.00th=[ 5211], 90.00th=[ 5407], 95.00th=[ 5538], 00:22:49.667 | 99.00th=[ 5800], 99.50th=[ 6259], 99.90th=[ 7570], 99.95th=[ 7963], 00:22:49.667 | 99.99th=[ 8225] 00:22:49.667 bw ( KiB/s): min=45888, max=46496, per=99.99%, avg=46152.00, stdev=252.48, samples=4 00:22:49.667 iops : min=11472, max=11624, avg=11538.00, stdev=63.12, samples=4 00:22:49.667 lat (msec) : 4=0.58%, 10=99.40%, 20=0.03% 00:22:49.667 cpu : usr=64.30%, sys=28.41%, ctx=365, majf=0, minf=3 00:22:49.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:49.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:49.667 issued rwts: total=23290,23124,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.667 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:49.667 00:22:49.667 Run status group 0 (all jobs): 00:22:49.667 READ: bw=45.4MiB/s (47.6MB/s), 45.4MiB/s-45.4MiB/s (47.6MB/s-47.6MB/s), io=91.0MiB (95.4MB), run=2004-2004msec 00:22:49.667 WRITE: bw=45.1MiB/s (47.3MB/s), 45.1MiB/s-45.1MiB/s (47.3MB/s-47.3MB/s), io=90.3MiB (94.7MB), run=2004-2004msec 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:49.667 15:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:49.925 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:49.925 fio-3.35 00:22:49.925 Starting 1 thread 00:22:51.303 [2024-11-20 15:31:55.177824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8688b0 is same with the state(6) to be set 00:22:51.303 [2024-11-20 15:31:55.177884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8688b0 is same with the state(6) to be set 00:22:52.239 00:22:52.239 test: (groupid=0, jobs=1): err= 0: pid=2254355: Wed Nov 20 15:31:56 2024 00:22:52.239 read: IOPS=10.6k, BW=165MiB/s (173MB/s)(332MiB/2007msec) 00:22:52.239 slat (nsec): min=2426, max=99836, avg=2838.05, stdev=1381.80 00:22:52.239 clat (usec): min=1512, max=12484, avg=6919.40, stdev=1579.14 00:22:52.239 lat (usec): min=1514, max=12487, avg=6922.24, stdev=1579.19 00:22:52.239 clat percentiles (usec): 00:22:52.239 | 1.00th=[ 3785], 5.00th=[ 4424], 10.00th=[ 4883], 20.00th=[ 5538], 00:22:52.239 | 30.00th=[ 6063], 40.00th=[ 6456], 50.00th=[ 6915], 60.00th=[ 7373], 00:22:52.239 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8979], 95.00th=[ 9765], 00:22:52.239 | 99.00th=[11076], 99.50th=[11338], 99.90th=[11994], 99.95th=[12125], 00:22:52.239 | 99.99th=[12387] 00:22:52.239 bw ( KiB/s): min=83936, max=89996, per=50.76%, avg=85891.00, stdev=2781.49, samples=4 00:22:52.239 iops : min= 5246, max= 5624, avg=5368.00, stdev=173.47, samples=4 00:22:52.239 write: IOPS=6294, BW=98.3MiB/s (103MB/s)(175MiB/1778msec); 0 zone 
resets 00:22:52.239 slat (usec): min=27, max=254, avg=31.58, stdev= 4.67 00:22:52.239 clat (usec): min=2752, max=15081, avg=9073.32, stdev=1538.02 00:22:52.239 lat (usec): min=2785, max=15112, avg=9104.89, stdev=1538.26 00:22:52.239 clat percentiles (usec): 00:22:52.239 | 1.00th=[ 5932], 5.00th=[ 6849], 10.00th=[ 7242], 20.00th=[ 7701], 00:22:52.239 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:22:52.239 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[11207], 95.00th=[11731], 00:22:52.239 | 99.00th=[12911], 99.50th=[13173], 99.90th=[13829], 99.95th=[14222], 00:22:52.239 | 99.99th=[15008] 00:22:52.239 bw ( KiB/s): min=85600, max=92135, per=88.42%, avg=89049.75, stdev=2693.00, samples=4 00:22:52.239 iops : min= 5350, max= 5758, avg=5565.50, stdev=168.15, samples=4 00:22:52.239 lat (msec) : 2=0.04%, 4=1.12%, 10=86.95%, 20=11.89% 00:22:52.239 cpu : usr=84.00%, sys=15.30%, ctx=39, majf=0, minf=3 00:22:52.239 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:22:52.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:52.239 issued rwts: total=21227,11191,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.239 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:52.239 00:22:52.239 Run status group 0 (all jobs): 00:22:52.239 READ: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=332MiB (348MB), run=2007-2007msec 00:22:52.239 WRITE: bw=98.3MiB/s (103MB/s), 98.3MiB/s-98.3MiB/s (103MB/s-103MB/s), io=175MiB (183MB), run=1778-1778msec 00:22:52.239 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:52.498 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:52.498 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap 
- SIGINT SIGTERM EXIT 00:22:52.498 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:52.498 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:52.498 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:52.498 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:52.498 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:52.498 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:52.498 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:52.498 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:52.498 rmmod nvme_tcp 00:22:52.498 rmmod nvme_fabrics 00:22:52.498 rmmod nvme_keyring 00:22:52.498 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:52.498 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:52.498 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:52.498 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2253386 ']' 00:22:52.498 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2253386 00:22:52.498 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2253386 ']' 00:22:52.498 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2253386 00:22:52.498 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:52.498 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:52.498 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2253386 00:22:52.757 15:31:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:52.757 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:52.757 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2253386' 00:22:52.757 killing process with pid 2253386 00:22:52.757 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2253386 00:22:52.757 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2253386 00:22:52.757 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:52.757 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:52.757 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:52.757 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:52.757 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:52.757 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:52.757 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:52.757 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:52.757 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:52.757 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.757 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.757 15:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.292 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:55.292 
00:22:55.292 real 0m15.715s 00:22:55.292 user 0m46.164s 00:22:55.292 sys 0m6.521s 00:22:55.292 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:55.292 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.292 ************************************ 00:22:55.292 END TEST nvmf_fio_host 00:22:55.292 ************************************ 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.293 ************************************ 00:22:55.293 START TEST nvmf_failover 00:22:55.293 ************************************ 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:55.293 * Looking for test storage... 
00:22:55.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:55.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.293 --rc genhtml_branch_coverage=1 00:22:55.293 --rc genhtml_function_coverage=1 00:22:55.293 --rc genhtml_legend=1 00:22:55.293 --rc geninfo_all_blocks=1 00:22:55.293 --rc geninfo_unexecuted_blocks=1 00:22:55.293 00:22:55.293 ' 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:22:55.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.293 --rc genhtml_branch_coverage=1 00:22:55.293 --rc genhtml_function_coverage=1 00:22:55.293 --rc genhtml_legend=1 00:22:55.293 --rc geninfo_all_blocks=1 00:22:55.293 --rc geninfo_unexecuted_blocks=1 00:22:55.293 00:22:55.293 ' 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:55.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.293 --rc genhtml_branch_coverage=1 00:22:55.293 --rc genhtml_function_coverage=1 00:22:55.293 --rc genhtml_legend=1 00:22:55.293 --rc geninfo_all_blocks=1 00:22:55.293 --rc geninfo_unexecuted_blocks=1 00:22:55.293 00:22:55.293 ' 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:55.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.293 --rc genhtml_branch_coverage=1 00:22:55.293 --rc genhtml_function_coverage=1 00:22:55.293 --rc genhtml_legend=1 00:22:55.293 --rc geninfo_all_blocks=1 00:22:55.293 --rc geninfo_unexecuted_blocks=1 00:22:55.293 00:22:55.293 ' 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:55.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:55.293 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:55.294 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:55.294 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:55.294 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:55.294 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:55.294 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:55.294 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:55.294 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:55.294 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.294 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:55.294 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:55.294 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:55.294 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.294 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.294 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.294 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:55.294 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:55.294 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:55.294 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.860 15:32:04 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:01.860 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.860 15:32:04 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:01.860 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:01.860 15:32:04 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:01.860 Found net devices under 0000:86:00.0: cvl_0_0 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:01.860 Found net devices under 0000:86:00.1: cvl_0_1 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:01.860 15:32:04 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.860 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:01.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:23:01.861 00:23:01.861 --- 10.0.0.2 ping statistics --- 00:23:01.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.861 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:01.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:23:01.861 00:23:01.861 --- 10.0.0.1 ping statistics --- 00:23:01.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.861 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2258329 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2258329 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2258329 ']' 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:01.861 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:01.861 [2024-11-20 15:32:04.967723] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:23:01.861 [2024-11-20 15:32:04.967774] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.861 [2024-11-20 15:32:05.049489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:01.861 [2024-11-20 15:32:05.091401] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.861 [2024-11-20 15:32:05.091437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.861 [2024-11-20 15:32:05.091444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.861 [2024-11-20 15:32:05.091450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:01.861 [2024-11-20 15:32:05.091455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:01.861 [2024-11-20 15:32:05.092848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.861 [2024-11-20 15:32:05.092968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.861 [2024-11-20 15:32:05.092969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:01.861 15:32:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:01.861 15:32:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:01.861 15:32:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:01.861 15:32:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:01.861 15:32:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:01.861 15:32:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.861 15:32:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:01.861 [2024-11-20 15:32:05.393790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.861 15:32:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:01.861 Malloc0 00:23:01.861 15:32:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:02.119 15:32:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:02.378 15:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:02.378 [2024-11-20 15:32:06.228607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.378 15:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:02.637 [2024-11-20 15:32:06.437268] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:02.637 15:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:02.896 [2024-11-20 15:32:06.625862] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:02.896 15:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2258602 00:23:02.896 15:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:02.896 15:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:02.896 15:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2258602 /var/tmp/bdevperf.sock 00:23:02.896 15:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 2258602 ']' 00:23:02.896 15:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:02.896 15:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.896 15:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:02.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:02.896 15:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.896 15:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:03.153 15:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.153 15:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:03.153 15:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:03.411 NVMe0n1 00:23:03.411 15:32:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:03.669 00:23:03.669 15:32:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2258828 00:23:03.669 15:32:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:03.669 15:32:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
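For reference, the network bring-up that nvmf/common.sh performed earlier in this log (address flush, namespace creation, moving the target device into the namespace, firewall rule on port 4420, bidirectional ping check) can be sketched as a standalone script. Device names and addresses are taken from this run; the `setup_cmds` helper is an illustrative addition that only prints the commands, so the sequence can be reviewed without root (pipe it to `sudo bash` to actually apply it).

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init bring-up seen above (nvmf/common.sh@267-@291).
# Prints the commands instead of running them; pipe to `sudo bash` to apply.
NS=cvl_0_0_ns_spdk               # target network namespace (name from this run)
TGT_IF=cvl_0_0 TGT_IP=10.0.0.2   # target-side net device and address
INI_IF=cvl_0_1 INI_IP=10.0.0.1   # initiator-side net device and address

setup_cmds() {
  echo "ip -4 addr flush $TGT_IF"
  echo "ip -4 addr flush $INI_IF"
  echo "ip netns add $NS"
  echo "ip link set $TGT_IF netns $NS"                          # target dev moves into the namespace
  echo "ip addr add $INI_IP/24 dev $INI_IF"
  echo "ip netns exec $NS ip addr add $TGT_IP/24 dev $TGT_IF"
  echo "ip link set $INI_IF up"
  echo "ip netns exec $NS ip link set $TGT_IF up"
  echo "ip netns exec $NS ip link set lo up"
  echo "iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"
  echo "ping -c 1 $TGT_IP"                                      # initiator -> target
  echo "ip netns exec $NS ping -c 1 $INI_IP"                    # target -> initiator
}
setup_cmds
```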
00:23:05.045 15:32:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-11-20 15:32:08.761384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d42d0 is same with the state(6) to be set
00:23:05.046 15:32:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:08.331 15:32:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:08.331 00:23:08.331 15:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 [2024-11-20 15:32:12.329605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5060 is same with the state(6) to be set
state of tqpair=0x18d5060 is same with the state(6) to be set 00:23:08.590 [2024-11-20 15:32:12.329675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5060 is same with the state(6) to be set 00:23:08.590 [2024-11-20 15:32:12.329681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5060 is same with the state(6) to be set 00:23:08.590 [2024-11-20 15:32:12.329692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5060 is same with the state(6) to be set 00:23:08.590 [2024-11-20 15:32:12.329698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5060 is same with the state(6) to be set 00:23:08.590 [2024-11-20 15:32:12.329704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5060 is same with the state(6) to be set 00:23:08.590 [2024-11-20 15:32:12.329710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5060 is same with the state(6) to be set 00:23:08.590 [2024-11-20 15:32:12.329716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5060 is same with the state(6) to be set 00:23:08.590 [2024-11-20 15:32:12.329722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5060 is same with the state(6) to be set 00:23:08.590 [2024-11-20 15:32:12.329727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5060 is same with the state(6) to be set 00:23:08.590 [2024-11-20 15:32:12.329733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5060 is same with the state(6) to be set 00:23:08.590 [2024-11-20 15:32:12.329739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5060 is same with the state(6) to be set 00:23:08.591 [2024-11-20 15:32:12.329745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5060 is 
same with the state(6) to be set 00:23:08.591 [2024-11-20 15:32:12.329752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5060 is same with the state(6) to be set 00:23:08.591 [2024-11-20 15:32:12.329757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5060 is same with the state(6) to be set 00:23:08.591 15:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:11.877 15:32:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:11.877 [2024-11-20 15:32:15.540316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.877 15:32:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:12.812 15:32:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:13.072 15:32:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2258828 00:23:19.644 { 00:23:19.644 "results": [ 00:23:19.644 { 00:23:19.644 "job": "NVMe0n1", 00:23:19.644 "core_mask": "0x1", 00:23:19.644 "workload": "verify", 00:23:19.644 "status": "finished", 00:23:19.644 "verify_range": { 00:23:19.644 "start": 0, 00:23:19.644 "length": 16384 00:23:19.644 }, 00:23:19.644 "queue_depth": 128, 00:23:19.644 "io_size": 4096, 00:23:19.644 "runtime": 15.005174, 00:23:19.644 "iops": 10895.24186790503, 00:23:19.644 "mibps": 42.55953854650402, 00:23:19.644 "io_failed": 10029, 00:23:19.644 "io_timeout": 0, 00:23:19.644 "avg_latency_us": 11046.975588848614, 00:23:19.644 "min_latency_us": 414.94260869565215, 00:23:19.644 "max_latency_us": 14019.005217391305 00:23:19.644 } 00:23:19.644 ], 00:23:19.644 "core_count": 1 00:23:19.644 } 
00:23:19.644 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2258602 00:23:19.644 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2258602 ']' 00:23:19.644 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2258602 00:23:19.644 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:19.644 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:19.644 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2258602 00:23:19.644 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:19.644 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:19.644 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2258602' 00:23:19.644 killing process with pid 2258602 00:23:19.644 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2258602 00:23:19.645 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2258602 00:23:19.645 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:19.645 [2024-11-20 15:32:06.698419] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:23:19.645 [2024-11-20 15:32:06.698472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2258602 ] 00:23:19.645 [2024-11-20 15:32:06.775692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.645 [2024-11-20 15:32:06.819654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.645 Running I/O for 15 seconds... 00:23:19.645 11193.00 IOPS, 43.72 MiB/s [2024-11-20T14:32:23.553Z] [2024-11-20 15:32:08.762500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.645 [2024-11-20 15:32:08.762532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.645 [2024-11-20 15:32:08.762548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.645 [2024-11-20 15:32:08.762556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.645 [2024-11-20 15:32:08.762565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.645 [2024-11-20 15:32:08.762572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.645 [2024-11-20 15:32:08.762581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.645 [2024-11-20 15:32:08.762588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:19.645 [READ / ABORTED - SQ DELETION entry pairs repeated for lba 98216 through 98792 omitted] 00:23:19.647 [2024-11-20 15:32:08.763712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.647 [2024-11-20 15:32:08.763718] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.763726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.763733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.763741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.763748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.763756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.763764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.763772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.763778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.763787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.763793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.763801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 
lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.763808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.763815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.763822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.763830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.763836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.763845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.763852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.763860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.763867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.763875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.763882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 
15:32:08.763890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.763896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.763904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.763910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.763918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.763925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.763932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.763939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.763950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.647 [2024-11-20 15:32:08.763957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.763965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.763971] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.763979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.763985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.763993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.764001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.764009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.764015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.764023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.764029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.764039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.764046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.764055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.764062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.764070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.764076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.764084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.764090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.764098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.764105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.764113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.764119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.764127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.764133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.764141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.764148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.764156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.764163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.764171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.764177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.764185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.764192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.647 [2024-11-20 15:32:08.764200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.647 [2024-11-20 15:32:08.764206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.648 [2024-11-20 15:32:08.764222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.648 [2024-11-20 15:32:08.764237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.648 [2024-11-20 15:32:08.764252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.648 [2024-11-20 15:32:08.764266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.648 [2024-11-20 15:32:08.764281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.648 [2024-11-20 15:32:08.764296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.648 
[2024-11-20 15:32:08.764310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.648 [2024-11-20 15:32:08.764324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.648 [2024-11-20 15:32:08.764339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.648 [2024-11-20 15:32:08.764354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.648 [2024-11-20 15:32:08.764368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.648 [2024-11-20 15:32:08.764383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764391] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.648 [2024-11-20 15:32:08.764398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.648 [2024-11-20 15:32:08.764414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.648 [2024-11-20 15:32:08.764428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:19.648 [2024-11-20 15:32:08.764456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99192 len:8 PRP1 0x0 PRP2 0x0 00:23:19.648 [2024-11-20 15:32:08.764463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:19.648 [2024-11-20 15:32:08.764477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:19.648 [2024-11-20 15:32:08.764483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99200 len:8 PRP1 0x0 PRP2 0x0 00:23:19.648 [2024-11-20 15:32:08.764490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764533] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:19.648 [2024-11-20 15:32:08.764554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.648 [2024-11-20 15:32:08.764562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.648 [2024-11-20 15:32:08.764576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.648 [2024-11-20 15:32:08.764597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.648 [2024-11-20 15:32:08.764611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:08.764624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:23:19.648 [2024-11-20 15:32:08.767476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:19.648 [2024-11-20 15:32:08.767503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b29340 (9): Bad file descriptor 00:23:19.648 [2024-11-20 15:32:08.836385] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:23:19.648 10682.00 IOPS, 41.73 MiB/s [2024-11-20T14:32:23.556Z] 10836.33 IOPS, 42.33 MiB/s [2024-11-20T14:32:23.556Z] 10891.00 IOPS, 42.54 MiB/s [2024-11-20T14:32:23.556Z] [2024-11-20 15:32:12.330362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.648 [2024-11-20 15:32:12.330394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:12.330403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.648 [2024-11-20 15:32:12.330415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:12.330427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.648 [2024-11-20 15:32:12.330434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:12.330441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.648 [2024-11-20 15:32:12.330448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:12.330454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b29340 is same with the state(6) to be set 00:23:19.648 [2024-11-20 15:32:12.330510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:39776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.648 [2024-11-20 15:32:12.330520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:12.330532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.648 [2024-11-20 15:32:12.330539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:12.330548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.648 [2024-11-20 15:32:12.330555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:12.330563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.648 [2024-11-20 15:32:12.330570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:12.330579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.648 [2024-11-20 15:32:12.330586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:12.330594] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:39816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.648 [2024-11-20 15:32:12.330601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:12.330609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:39824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.648 [2024-11-20 15:32:12.330616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:12.330624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:39832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.648 [2024-11-20 15:32:12.330631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.648 [2024-11-20 15:32:12.330639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:39856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:39888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:19.649 [2024-11-20 15:32:12.330769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:39952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:39960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:39984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.330985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.330994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.331001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.331009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:40032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.331016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.331024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 
[2024-11-20 15:32:12.331031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.331039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.331045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.331055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:40056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.331061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.331069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.331076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.331085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.331091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.331099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.331105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.331114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:40088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.331120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.649 [2024-11-20 15:32:12.331128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.649 [2024-11-20 15:32:12.331134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:40128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:40168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 
[2024-11-20 15:32:12.331283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:40184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:40224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:40232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:40248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:40272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:40280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:40288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:40296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:40312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:19.650 [2024-11-20 15:32:12.331538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:40336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.650 [2024-11-20 15:32:12.331569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.650 [2024-11-20 15:32:12.331584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:40352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.650 [2024-11-20 15:32:12.331598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:40360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.650 [2024-11-20 15:32:12.331614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:40368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.650 [2024-11-20 15:32:12.331629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.650 [2024-11-20 15:32:12.331643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:40384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.650 [2024-11-20 15:32:12.331657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.650 [2024-11-20 15:32:12.331673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.650 [2024-11-20 15:32:12.331688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:40400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.650 [2024-11-20 15:32:12.331702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.650 [2024-11-20 15:32:12.331710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:40408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.650 [2024-11-20 15:32:12.331717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.331725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.331731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.331740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.331746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.331754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.331761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.331769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:40440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.331775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.331783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:40448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 
[2024-11-20 15:32:12.331789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.331798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.331807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.331815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.331821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.331829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:40472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.331835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.331843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:40480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.331850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.331858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.331864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.331872] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.331878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.331886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:40504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.331893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.331907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:40512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.331914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.331922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.331929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.331937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:40528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.331944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.331956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:40536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.331963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.331971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:40544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.331977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.331985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:40552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.331992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.332002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:40560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.332008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.332017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.332024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.332032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.332039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.332046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:40584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.332053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.332061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:40592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.332067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.332076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:40600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.332082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.332090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.332097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.332105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:40616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.332111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.332119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.332126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.332134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 
lba:40632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.332141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.332150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:40640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.332159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.332166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:40648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.332173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.332181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:40656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.332190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.332198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:40664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.332204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 15:32:12.332212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:40672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.651 [2024-11-20 15:32:12.332219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.651 [2024-11-20 
15:32:12.332227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.651 [2024-11-20 15:32:12.332233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 13 similar WRITE/ABORTED - SQ DELETION pairs elided, lba:40688 through lba:40784 ...]
00:23:19.652 [2024-11-20 15:32:12.332448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:19.652 [2024-11-20 15:32:12.332455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:19.652 [2024-11-20 15:32:12.332461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40792 len:8 PRP1 0x0 PRP2 0x0
00:23:19.652 [2024-11-20 15:32:12.332467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:19.652 [2024-11-20 15:32:12.332509] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:19.652 [2024-11-20 15:32:12.332519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:23:19.652 [2024-11-20 15:32:12.335349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:23:19.652 [2024-11-20 15:32:12.335378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b29340 (9): Bad file descriptor
00:23:19.652 [2024-11-20 15:32:12.399846] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:23:19.652 10749.60 IOPS, 41.99 MiB/s [2024-11-20T14:32:23.560Z] 10799.00 IOPS, 42.18 MiB/s [2024-11-20T14:32:23.560Z] 10844.86 IOPS, 42.36 MiB/s [2024-11-20T14:32:23.560Z] 10865.75 IOPS, 42.44 MiB/s [2024-11-20T14:32:23.560Z] 10893.00 IOPS, 42.55 MiB/s [2024-11-20T14:32:23.560Z]
00:23:19.652 [2024-11-20 15:32:16.756968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:57488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.652 [2024-11-20 15:32:16.757008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 43 similar WRITE/ABORTED - SQ DELETION pairs elided, lba:57496 through lba:57832 ...]
00:23:19.653 [2024-11-20 15:32:16.757676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:19.653 [2024-11-20 15:32:16.757682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... similar READ/ABORTED - SQ DELETION pairs elided for lba:56864 through lba:57032, followed by WRITE pairs for lba:57840 through lba:57864 and further READ pairs for lba:57040 through lba:57224 ...]
[2024-11-20 15:32:16.758424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.654 [2024-11-20 15:32:16.758432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:57232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.654 [2024-11-20 15:32:16.758439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.654 [2024-11-20 15:32:16.758447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:57240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.654 [2024-11-20 15:32:16.758453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.654 [2024-11-20 15:32:16.758463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:57248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.654 [2024-11-20 15:32:16.758469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.654 [2024-11-20 15:32:16.758477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:57272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:57280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:57288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:57296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:57304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:57320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:57328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:57336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:57344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:57352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:57360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:19.655 [2024-11-20 15:32:16.758675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:57376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:57384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:57400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:57416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.655 [2024-11-20 15:32:16.758794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:57424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:57440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:57448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:57464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.655 [2024-11-20 15:32:16.758896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c84470 is same with the state(6) to be set 00:23:19.655 [2024-11-20 15:32:16.758912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:19.655 [2024-11-20 15:32:16.758917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:19.655 [2024-11-20 15:32:16.758923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:57480 len:8 PRP1 0x0 PRP2 0x0 00:23:19.655 [2024-11-20 15:32:16.758931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.758978] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:19.655 [2024-11-20 15:32:16.759002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.655 [2024-11-20 15:32:16.759010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.759021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.655 [2024-11-20 15:32:16.759028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.759035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.655 [2024-11-20 15:32:16.759041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.759049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.655 [2024-11-20 15:32:16.759055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.655 [2024-11-20 15:32:16.759062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:23:19.655 [2024-11-20 15:32:16.761921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:19.655 [2024-11-20 15:32:16.761955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b29340 (9): Bad file descriptor 00:23:19.655 [2024-11-20 15:32:16.831366] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:23:19.655 10811.10 IOPS, 42.23 MiB/s [2024-11-20T14:32:23.563Z] 10827.64 IOPS, 42.30 MiB/s [2024-11-20T14:32:23.563Z] 10861.58 IOPS, 42.43 MiB/s [2024-11-20T14:32:23.563Z] 10870.00 IOPS, 42.46 MiB/s [2024-11-20T14:32:23.563Z] 10893.64 IOPS, 42.55 MiB/s 00:23:19.655 Latency(us) 00:23:19.655 [2024-11-20T14:32:23.563Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.655 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:19.655 Verification LBA range: start 0x0 length 0x4000 00:23:19.655 NVMe0n1 : 15.01 10895.24 42.56 668.37 0.00 11046.98 414.94 14019.01 00:23:19.656 [2024-11-20T14:32:23.564Z] =================================================================================================================== 00:23:19.656 [2024-11-20T14:32:23.564Z] Total : 10895.24 42.56 668.37 0.00 11046.98 414.94 14019.01 00:23:19.656 Received shutdown signal, test time was about 15.000000 seconds 00:23:19.656 00:23:19.656 Latency(us) 00:23:19.656 [2024-11-20T14:32:23.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.656 [2024-11-20T14:32:23.564Z] =================================================================================================================== 00:23:19.656 [2024-11-20T14:32:23.564Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:19.656 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:19.656 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@65 -- # count=3 00:23:19.656 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:19.656 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2261351 00:23:19.656 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:19.656 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2261351 /var/tmp/bdevperf.sock 00:23:19.656 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2261351 ']' 00:23:19.656 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.656 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.656 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
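The trace above shows failover.sh validating the run by counting "Resetting controller successful" notices in the captured bdevperf log and requiring exactly 3 (one per failover hop). A minimal sketch of that check, using an inline sample log in place of the real try.txt capture (the sample text is hypothetical, not taken from this run):

```shell
# Hypothetical stand-in for the captured bdevperf log (try.txt in the real run):
# three reset-success notices, one per failover transition.
log=$'[cnode1, 2] Resetting controller successful.\n[cnode1, 6] Resetting controller successful.\n[cnode1, 10] Resetting controller successful.'

# Count the notices the same way the trace does: grep -c on the fixed string.
count=$(printf '%s\n' "$log" | grep -c 'Resetting controller successful')

# failover.sh fails the test unless the count is exactly 3.
if (( count != 3 )); then
  echo "unexpected failover count: $count"
  exit 1
fi
echo "count=$count"
```

The fixed-string grep keeps the check robust against the changing controller-instance numbers in the bracketed prefix.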
00:23:19.656 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.656 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:19.656 15:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:19.656 15:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:19.656 15:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:19.656 [2024-11-20 15:32:23.373138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:19.656 15:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:19.914 [2024-11-20 15:32:23.561683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:19.914 15:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:20.173 NVMe0n1 00:23:20.173 15:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:20.431 00:23:20.431 15:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:20.997 00:23:20.997 15:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:20.997 15:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:20.997 15:32:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:21.255 15:32:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:24.540 15:32:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:24.540 15:32:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:24.540 15:32:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2262148 00:23:24.540 15:32:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:24.540 15:32:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2262148 00:23:25.476 { 00:23:25.476 "results": [ 00:23:25.476 { 00:23:25.476 "job": "NVMe0n1", 00:23:25.476 "core_mask": "0x1", 00:23:25.476 "workload": "verify", 00:23:25.476 "status": "finished", 00:23:25.476 "verify_range": { 00:23:25.476 "start": 0, 00:23:25.476 "length": 16384 00:23:25.476 }, 00:23:25.476 "queue_depth": 128, 00:23:25.476 "io_size": 4096, 00:23:25.476 "runtime": 1.004036, 00:23:25.476 "iops": 10927.895015716567, 00:23:25.476 "mibps": 42.68708990514284, 00:23:25.476 "io_failed": 0, 00:23:25.476 "io_timeout": 0, 00:23:25.476 "avg_latency_us": 
11667.298461538461, 00:23:25.476 "min_latency_us": 2008.8208695652174, 00:23:25.476 "max_latency_us": 14075.99304347826 00:23:25.476 } 00:23:25.476 ], 00:23:25.476 "core_count": 1 00:23:25.476 } 00:23:25.736 15:32:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:25.736 [2024-11-20 15:32:22.997628] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:23:25.736 [2024-11-20 15:32:22.997682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2261351 ] 00:23:25.736 [2024-11-20 15:32:23.071695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.736 [2024-11-20 15:32:23.110422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.737 [2024-11-20 15:32:25.020151] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:25.737 [2024-11-20 15:32:25.020197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.737 [2024-11-20 15:32:25.020208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.737 [2024-11-20 15:32:25.020217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.737 [2024-11-20 15:32:25.020224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.737 [2024-11-20 15:32:25.020232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:25.737 [2024-11-20 15:32:25.020238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.737 [2024-11-20 15:32:25.020245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.737 [2024-11-20 15:32:25.020252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.737 [2024-11-20 15:32:25.020259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:25.737 [2024-11-20 15:32:25.020284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:25.737 [2024-11-20 15:32:25.020297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea5340 (9): Bad file descriptor 00:23:25.737 [2024-11-20 15:32:25.072038] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:25.737 Running I/O for 1 seconds... 
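The setup replayed in the trace (extra listeners on ports 4421/4422, then three `bdev_nvme_attach_controller` calls with `-x failover`) can be sketched as below. `rpc` here is a stub that only echoes the command line, standing in for scripts/rpc.py so the sequence can be shown without a live SPDK target; the stub and loop are illustrative, not the literal failover.sh code:

```shell
# Stub for scripts/rpc.py: echo the would-be invocation instead of running it.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

# Expose the subsystem on the two extra ports used as failover targets.
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

# Attach the same controller once per port with -x failover so bdev_nvme
# treats 4421/4422 as alternate paths for NVMe0.
for port in 4420 4421 4422; do
  rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN" -x failover
done
```

With all three paths registered, killing the active listener (the `bdev_nvme_detach_controller` steps in the trace) forces the "Start failover from … to …" transitions logged above.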
00:23:25.737 10844.00 IOPS, 42.36 MiB/s 00:23:25.737 Latency(us) 00:23:25.737 [2024-11-20T14:32:29.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.737 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:25.737 Verification LBA range: start 0x0 length 0x4000 00:23:25.737 NVMe0n1 : 1.00 10927.90 42.69 0.00 0.00 11667.30 2008.82 14075.99 00:23:25.737 [2024-11-20T14:32:29.645Z] =================================================================================================================== 00:23:25.737 [2024-11-20T14:32:29.645Z] Total : 10927.90 42.69 0.00 0.00 11667.30 2008.82 14075.99 00:23:25.737 15:32:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:25.737 15:32:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:25.737 15:32:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:26.092 15:32:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:26.092 15:32:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:26.366 15:32:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:26.366 15:32:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:29.649 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:29.649 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:29.649 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2261351 00:23:29.649 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2261351 ']' 00:23:29.649 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2261351 00:23:29.649 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:29.649 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:29.649 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2261351 00:23:29.649 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:29.649 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:29.649 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2261351' 00:23:29.649 killing process with pid 2261351 00:23:29.649 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2261351 00:23:29.649 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2261351 00:23:29.908 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:29.908 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:30.166 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:30.166 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:30.166 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:30.166 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:30.166 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:30.166 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:30.166 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:30.167 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:30.167 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:30.167 rmmod nvme_tcp 00:23:30.167 rmmod nvme_fabrics 00:23:30.167 rmmod nvme_keyring 00:23:30.167 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:30.167 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:30.167 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:30.167 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2258329 ']' 00:23:30.167 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2258329 00:23:30.167 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2258329 ']' 00:23:30.167 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2258329 00:23:30.167 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:30.167 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.167 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2258329 00:23:30.167 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:23:30.167 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:30.167 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2258329' 00:23:30.167 killing process with pid 2258329 00:23:30.167 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2258329 00:23:30.167 15:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2258329 00:23:30.426 15:32:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:30.426 15:32:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:30.426 15:32:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:30.426 15:32:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:30.426 15:32:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:30.426 15:32:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:30.426 15:32:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:30.426 15:32:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:30.426 15:32:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:30.426 15:32:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.426 15:32:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.426 15:32:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.331 15:32:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:32.331 00:23:32.331 real 0m37.444s 00:23:32.331 user 1m58.509s 00:23:32.331 sys 
0m8.001s 00:23:32.331 15:32:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:32.331 15:32:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:32.331 ************************************ 00:23:32.331 END TEST nvmf_failover 00:23:32.331 ************************************ 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.590 ************************************ 00:23:32.590 START TEST nvmf_host_discovery 00:23:32.590 ************************************ 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:32.590 * Looking for test storage... 
00:23:32.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:32.590 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:32.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.591 --rc genhtml_branch_coverage=1 00:23:32.591 --rc genhtml_function_coverage=1 00:23:32.591 --rc 
genhtml_legend=1 00:23:32.591 --rc geninfo_all_blocks=1 00:23:32.591 --rc geninfo_unexecuted_blocks=1 00:23:32.591 00:23:32.591 ' 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:32.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.591 --rc genhtml_branch_coverage=1 00:23:32.591 --rc genhtml_function_coverage=1 00:23:32.591 --rc genhtml_legend=1 00:23:32.591 --rc geninfo_all_blocks=1 00:23:32.591 --rc geninfo_unexecuted_blocks=1 00:23:32.591 00:23:32.591 ' 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:32.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.591 --rc genhtml_branch_coverage=1 00:23:32.591 --rc genhtml_function_coverage=1 00:23:32.591 --rc genhtml_legend=1 00:23:32.591 --rc geninfo_all_blocks=1 00:23:32.591 --rc geninfo_unexecuted_blocks=1 00:23:32.591 00:23:32.591 ' 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:32.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.591 --rc genhtml_branch_coverage=1 00:23:32.591 --rc genhtml_function_coverage=1 00:23:32.591 --rc genhtml_legend=1 00:23:32.591 --rc geninfo_all_blocks=1 00:23:32.591 --rc geninfo_unexecuted_blocks=1 00:23:32.591 00:23:32.591 ' 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.591 15:32:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.591 15:32:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.591 15:32:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:32.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:32.591 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:32.851 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:32.851 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:32.851 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:32.851 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:32.851 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:32.851 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:32.851 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:32.851 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:32.851 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:32.851 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:32.851 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:32.851 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:23:32.851 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.851 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.851 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.851 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:32.851 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:32.851 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:32.851 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:39.416 
15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.416 15:32:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:39.416 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:39.416 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.416 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:39.417 Found net devices under 0000:86:00.0: cvl_0_0 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:39.417 Found net devices under 0000:86:00.1: cvl_0_1 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:39.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:23:39.417 00:23:39.417 --- 10.0.0.2 ping statistics --- 00:23:39.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.417 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:39.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:23:39.417 00:23:39.417 --- 10.0.0.1 ping statistics --- 00:23:39.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.417 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.417 
15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2266520 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2266520 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2266520 ']' 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.417 [2024-11-20 15:32:42.485731] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:23:39.417 [2024-11-20 15:32:42.485782] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.417 [2024-11-20 15:32:42.567285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.417 [2024-11-20 15:32:42.608330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.417 [2024-11-20 15:32:42.608368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.417 [2024-11-20 15:32:42.608376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.417 [2024-11-20 15:32:42.608382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.417 [2024-11-20 15:32:42.608387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:39.417 [2024-11-20 15:32:42.608961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.417 [2024-11-20 15:32:42.740963] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.417 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.418 [2024-11-20 15:32:42.753138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:39.418 15:32:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.418 null0 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.418 null1 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2266722 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2266722 /tmp/host.sock 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 2266722 ']' 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:39.418 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.418 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.418 [2024-11-20 15:32:42.830081] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:23:39.418 [2024-11-20 15:32:42.830126] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2266722 ] 00:23:39.418 [2024-11-20 15:32:42.903622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.418 [2024-11-20 15:32:42.946616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:39.418 
15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:39.418 15:32:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:39.418 15:32:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:39.418 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.677 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:39.677 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:23:39.677 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.677 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.677 [2024-11-20 15:32:43.354674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.677 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.677 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:39.677 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:23:39.678 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:40.245 [2024-11-20 15:32:44.109105] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:40.245 [2024-11-20 15:32:44.109123] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:40.245 [2024-11-20 15:32:44.109136] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:40.503 [2024-11-20 15:32:44.195393] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:40.503 [2024-11-20 15:32:44.249902] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:40.503 [2024-11-20 15:32:44.250560] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x1d0edd0:1 started. 00:23:40.503 [2024-11-20 15:32:44.251954] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:40.503 [2024-11-20 15:32:44.251971] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:40.503 [2024-11-20 15:32:44.257442] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d0edd0 was disconnected and freed. delete nvme_qpair. 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:40.761 15:32:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:40.761 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:40.762 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:40.762 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:40.762 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:40.762 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:40.762 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:40.762 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:40.762 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:40.762 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:40.762 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.762 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:40.762 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.762 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:40.762 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:41.021 
15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.021 [2024-11-20 15:32:44.897726] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d0f1a0:1 started. 00:23:41.021 [2024-11-20 15:32:44.900008] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d0f1a0 was disconnected and freed. delete nvme_qpair. 
00:23:41.021 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.280 [2024-11-20 15:32:44.979069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:41.280 [2024-11-20 15:32:44.980026] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:41.280 [2024-11-20 15:32:44.980044] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.280 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.280 15:32:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.280 [2024-11-20 15:32:45.067656] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:41.280 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:41.539 [2024-11-20 15:32:45.367057] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:41.539 [2024-11-20 15:32:45.367091] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:41.539 [2024-11-20 15:32:45.367099] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:23:41.539 [2024-11-20 15:32:45.367104] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.477 [2024-11-20 15:32:46.227422] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:42.477 [2024-11-20 15:32:46.227442] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:42.477 [2024-11-20 15:32:46.232109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.477 [2024-11-20 15:32:46.232126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.477 [2024-11-20 15:32:46.232134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.477 [2024-11-20 15:32:46.232141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.477 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:42.477 [2024-11-20 15:32:46.232165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.477 [2024-11-20 15:32:46.232172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.477 [2024-11-20 15:32:46.232179] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.477 [2024-11-20 15:32:46.232185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.477 [2024-11-20 15:32:46.232196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf390 is same with the state(6) to be set 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:42.478 [2024-11-20 15:32:46.242123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdf390 (9): Bad file descriptor 00:23:42.478 [2024-11-20 15:32:46.252157] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:23:42.478 [2024-11-20 15:32:46.252169] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:42.478 [2024-11-20 15:32:46.252173] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:42.478 [2024-11-20 15:32:46.252178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:42.478 [2024-11-20 15:32:46.252198] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:42.478 [2024-11-20 15:32:46.252477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.478 [2024-11-20 15:32:46.252491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdf390 with addr=10.0.0.2, port=4420 00:23:42.478 [2024-11-20 15:32:46.252499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf390 is same with the state(6) to be set 00:23:42.478 [2024-11-20 15:32:46.252510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdf390 (9): Bad file descriptor 00:23:42.478 [2024-11-20 15:32:46.252527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:42.478 [2024-11-20 15:32:46.252534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:42.478 [2024-11-20 15:32:46.252541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:42.478 [2024-11-20 15:32:46.252547] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:42.478 [2024-11-20 15:32:46.252552] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.478 [2024-11-20 15:32:46.252556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:42.478 [2024-11-20 15:32:46.262228] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:42.478 [2024-11-20 15:32:46.262238] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:42.478 [2024-11-20 15:32:46.262242] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:42.478 [2024-11-20 15:32:46.262246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:42.478 [2024-11-20 15:32:46.262262] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:42.478 [2024-11-20 15:32:46.262510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.478 [2024-11-20 15:32:46.262522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdf390 with addr=10.0.0.2, port=4420 00:23:42.478 [2024-11-20 15:32:46.262529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf390 is same with the state(6) to be set 00:23:42.478 [2024-11-20 15:32:46.262539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdf390 (9): Bad file descriptor 00:23:42.478 [2024-11-20 15:32:46.262553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:42.478 [2024-11-20 15:32:46.262560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:42.478 [2024-11-20 15:32:46.262566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:42.478 [2024-11-20 15:32:46.262571] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:42.478 [2024-11-20 15:32:46.262576] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:42.478 [2024-11-20 15:32:46.262579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:42.478 [2024-11-20 15:32:46.272294] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:42.478 [2024-11-20 15:32:46.272306] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:42.478 [2024-11-20 15:32:46.272310] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:42.478 [2024-11-20 15:32:46.272314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:42.478 [2024-11-20 15:32:46.272328] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:42.478 [2024-11-20 15:32:46.272535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.478 [2024-11-20 15:32:46.272547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdf390 with addr=10.0.0.2, port=4420 00:23:42.478 [2024-11-20 15:32:46.272554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf390 is same with the state(6) to be set 00:23:42.478 [2024-11-20 15:32:46.272564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdf390 (9): Bad file descriptor 00:23:42.478 [2024-11-20 15:32:46.272573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:42.478 [2024-11-20 15:32:46.272579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:42.478 [2024-11-20 15:32:46.272586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:42.478 [2024-11-20 15:32:46.272592] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:42.478 [2024-11-20 15:32:46.272596] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:42.478 [2024-11-20 15:32:46.272600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:42.478 [2024-11-20 15:32:46.282360] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:42.478 [2024-11-20 15:32:46.282373] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:42.478 [2024-11-20 15:32:46.282378] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:42.478 [2024-11-20 15:32:46.282384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:42.478 [2024-11-20 15:32:46.282400] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:42.478 [2024-11-20 15:32:46.282576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.478 [2024-11-20 15:32:46.282589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdf390 with addr=10.0.0.2, port=4420 00:23:42.478 [2024-11-20 15:32:46.282596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf390 is same with the state(6) to be set 00:23:42.478 [2024-11-20 15:32:46.282607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdf390 (9): Bad file descriptor 00:23:42.478 [2024-11-20 15:32:46.282621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:42.478 [2024-11-20 15:32:46.282628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:42.478 [2024-11-20 15:32:46.282635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:42.478 [2024-11-20 15:32:46.282641] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:42.478 [2024-11-20 15:32:46.282645] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:42.478 [2024-11-20 15:32:46.282649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.478 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:42.478 [2024-11-20 15:32:46.292431] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:42.478 [2024-11-20 15:32:46.292444] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:42.478 [2024-11-20 15:32:46.292448] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:42.479 [2024-11-20 15:32:46.292453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:42.479 [2024-11-20 15:32:46.292467] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:42.479 [2024-11-20 15:32:46.292717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.479 [2024-11-20 15:32:46.292734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdf390 with addr=10.0.0.2, port=4420 00:23:42.479 [2024-11-20 15:32:46.292741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf390 is same with the state(6) to be set 00:23:42.479 [2024-11-20 15:32:46.292751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdf390 (9): Bad file descriptor 00:23:42.479 [2024-11-20 15:32:46.292761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:42.479 [2024-11-20 15:32:46.292766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:42.479 [2024-11-20 15:32:46.292773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:42.479 [2024-11-20 15:32:46.292778] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:42.479 [2024-11-20 15:32:46.292783] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:42.479 [2024-11-20 15:32:46.292786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:42.479 [2024-11-20 15:32:46.302497] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:42.479 [2024-11-20 15:32:46.302508] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:42.479 [2024-11-20 15:32:46.302512] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:42.479 [2024-11-20 15:32:46.302516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:42.479 [2024-11-20 15:32:46.302528] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:42.479 [2024-11-20 15:32:46.302718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.479 [2024-11-20 15:32:46.302730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdf390 with addr=10.0.0.2, port=4420 00:23:42.479 [2024-11-20 15:32:46.302737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf390 is same with the state(6) to be set 00:23:42.479 [2024-11-20 15:32:46.302747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdf390 (9): Bad file descriptor 00:23:42.479 [2024-11-20 15:32:46.302762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:42.479 [2024-11-20 15:32:46.302769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:42.479 [2024-11-20 15:32:46.302775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:42.479 [2024-11-20 15:32:46.302781] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:42.479 [2024-11-20 15:32:46.302785] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:42.479 [2024-11-20 15:32:46.302789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:42.479 [2024-11-20 15:32:46.312559] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:42.479 [2024-11-20 15:32:46.312570] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:42.479 [2024-11-20 15:32:46.312574] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:42.479 [2024-11-20 15:32:46.312578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:42.479 [2024-11-20 15:32:46.312590] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:42.479 [2024-11-20 15:32:46.312703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.479 [2024-11-20 15:32:46.312714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdf390 with addr=10.0.0.2, port=4420 00:23:42.479 [2024-11-20 15:32:46.312722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf390 is same with the state(6) to be set 00:23:42.479 [2024-11-20 15:32:46.312732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdf390 (9): Bad file descriptor 00:23:42.479 [2024-11-20 15:32:46.312742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:42.479 [2024-11-20 15:32:46.312748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:42.479 [2024-11-20 15:32:46.312754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:42.479 [2024-11-20 15:32:46.312760] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:42.479 [2024-11-20 15:32:46.312764] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:42.479 [2024-11-20 15:32:46.312768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:42.479 [2024-11-20 15:32:46.322621] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:42.479 [2024-11-20 15:32:46.322632] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:42.479 [2024-11-20 15:32:46.322636] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:42.479 [2024-11-20 15:32:46.322639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:42.479 [2024-11-20 15:32:46.322652] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:42.479 [2024-11-20 15:32:46.322915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.479 [2024-11-20 15:32:46.322927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdf390 with addr=10.0.0.2, port=4420 00:23:42.479 [2024-11-20 15:32:46.322934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf390 is same with the state(6) to be set 00:23:42.479 [2024-11-20 15:32:46.322944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdf390 (9): Bad file descriptor 00:23:42.479 [2024-11-20 15:32:46.322964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:42.479 [2024-11-20 15:32:46.322971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:42.479 [2024-11-20 15:32:46.322978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:42.479 [2024-11-20 15:32:46.322983] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:42.479 [2024-11-20 15:32:46.322988] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:42.479 [2024-11-20 15:32:46.322992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:42.479 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.479 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:42.479 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:42.479 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:42.479 [2024-11-20 15:32:46.332683] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:42.479 [2024-11-20 15:32:46.332698] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:42.479 [2024-11-20 15:32:46.332702] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:42.479 [2024-11-20 15:32:46.332706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:42.479 [2024-11-20 15:32:46.332718] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:42.479 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:42.479 [2024-11-20 15:32:46.332958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.479 [2024-11-20 15:32:46.332973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdf390 with addr=10.0.0.2, port=4420 00:23:42.479 [2024-11-20 15:32:46.332980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf390 is same with the state(6) to be set 00:23:42.479 [2024-11-20 15:32:46.332990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdf390 (9): Bad file descriptor 00:23:42.479 [2024-11-20 15:32:46.332999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:42.479 [2024-11-20 15:32:46.333006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:42.479 [2024-11-20 15:32:46.333012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:42.479 [2024-11-20 15:32:46.333017] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:42.479 [2024-11-20 15:32:46.333022] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:42.479 [2024-11-20 15:32:46.333025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:42.479 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:42.479 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:42.479 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:42.479 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:42.479 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:42.479 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:42.479 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.479 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:42.479 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.479 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:42.479 [2024-11-20 15:32:46.342749] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:42.479 [2024-11-20 15:32:46.342762] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:42.479 [2024-11-20 15:32:46.342766] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:42.479 [2024-11-20 15:32:46.342770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:42.480 [2024-11-20 15:32:46.342783] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:42.480 [2024-11-20 15:32:46.342933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.480 [2024-11-20 15:32:46.342952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdf390 with addr=10.0.0.2, port=4420 00:23:42.480 [2024-11-20 15:32:46.342959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf390 is same with the state(6) to be set 00:23:42.480 [2024-11-20 15:32:46.342970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdf390 (9): Bad file descriptor 00:23:42.480 [2024-11-20 15:32:46.342979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:42.480 [2024-11-20 15:32:46.342985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:42.480 [2024-11-20 15:32:46.342992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:42.480 [2024-11-20 15:32:46.342998] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:42.480 [2024-11-20 15:32:46.343002] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:42.480 [2024-11-20 15:32:46.343006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:42.480 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.480 [2024-11-20 15:32:46.352813] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:42.480 [2024-11-20 15:32:46.352825] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:42.480 [2024-11-20 15:32:46.352829] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:42.480 [2024-11-20 15:32:46.352833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:42.480 [2024-11-20 15:32:46.352845] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:42.480 [2024-11-20 15:32:46.353096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.480 [2024-11-20 15:32:46.353109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdf390 with addr=10.0.0.2, port=4420 00:23:42.480 [2024-11-20 15:32:46.353116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf390 is same with the state(6) to be set 00:23:42.480 [2024-11-20 15:32:46.353126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdf390 (9): Bad file descriptor 00:23:42.480 [2024-11-20 15:32:46.353136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:42.480 [2024-11-20 15:32:46.353142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:42.480 [2024-11-20 15:32:46.353149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:42.480 [2024-11-20 15:32:46.353154] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:42.480 [2024-11-20 15:32:46.353159] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:42.480 [2024-11-20 15:32:46.353163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:42.480 [2024-11-20 15:32:46.354468] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:42.480 [2024-11-20 15:32:46.354483] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:42.739 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:23:42.739 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@922 -- # return 0 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:43.674 15:32:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.674 
15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:43.674 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.932 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:43.932 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:43.932 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:43.932 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:43.932 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:43.932 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:43.932 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:43.932 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:43.932 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:43.932 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:43.932 15:32:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:43.932 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:43.932 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.932 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.932 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.932 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:43.933 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:43.933 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:43.933 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:43.933 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:43.933 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.933 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.867 [2024-11-20 15:32:48.691415] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:44.867 [2024-11-20 15:32:48.691432] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:44.867 [2024-11-20 15:32:48.691443] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:45.125 [2024-11-20 15:32:48.820851] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:45.125 [2024-11-20 15:32:48.964678] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:45.125 [2024-11-20 15:32:48.965264] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1d08900:1 started. 00:23:45.125 [2024-11-20 15:32:48.966826] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:45.125 [2024-11-20 15:32:48.966851] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:45.125 [2024-11-20 15:32:48.970072] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1d08900 was disconnected and freed. delete nvme_qpair. 
00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.125 request: 00:23:45.125 { 00:23:45.125 "name": "nvme", 00:23:45.125 "trtype": "tcp", 00:23:45.125 "traddr": "10.0.0.2", 00:23:45.125 "adrfam": "ipv4", 00:23:45.125 "trsvcid": "8009", 00:23:45.125 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:45.125 "wait_for_attach": true, 00:23:45.125 "method": "bdev_nvme_start_discovery", 00:23:45.125 "req_id": 1 00:23:45.125 } 00:23:45.125 Got JSON-RPC error response 00:23:45.125 response: 00:23:45.125 { 00:23:45.125 "code": -17, 00:23:45.125 "message": "File exists" 00:23:45.125 } 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.125 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:45.125 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.385 request: 00:23:45.385 { 00:23:45.385 "name": "nvme_second", 00:23:45.385 "trtype": "tcp", 00:23:45.385 "traddr": "10.0.0.2", 00:23:45.385 "adrfam": "ipv4", 00:23:45.385 "trsvcid": "8009", 00:23:45.385 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:45.385 "wait_for_attach": true, 00:23:45.385 "method": "bdev_nvme_start_discovery", 00:23:45.385 "req_id": 1 00:23:45.385 } 00:23:45.385 Got JSON-RPC error response 00:23:45.385 response: 00:23:45.385 { 00:23:45.385 "code": -17, 00:23:45.385 "message": "File exists" 00:23:45.385 } 
00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:45.385 15:32:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.320 [2024-11-20 15:32:50.202566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.320 [2024-11-20 15:32:50.202601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d0aba0 with addr=10.0.0.2, port=8010 00:23:46.320 [2024-11-20 15:32:50.202617] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:46.320 [2024-11-20 15:32:50.202629] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:46.320 [2024-11-20 15:32:50.202635] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:47.708 [2024-11-20 15:32:51.204997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.709 [2024-11-20 15:32:51.205023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d0aba0 with addr=10.0.0.2, port=8010 00:23:47.709 [2024-11-20 15:32:51.205035] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:47.709 [2024-11-20 15:32:51.205041] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:47.709 [2024-11-20 15:32:51.205047] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:48.645 [2024-11-20 15:32:52.207179] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:48.645 request: 00:23:48.645 { 00:23:48.645 "name": "nvme_second", 00:23:48.645 "trtype": "tcp", 00:23:48.645 "traddr": "10.0.0.2", 00:23:48.645 "adrfam": "ipv4", 00:23:48.645 "trsvcid": "8010", 00:23:48.645 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:48.645 "wait_for_attach": false, 00:23:48.645 "attach_timeout_ms": 3000, 00:23:48.645 "method": "bdev_nvme_start_discovery", 00:23:48.645 "req_id": 1 
00:23:48.645 } 00:23:48.645 Got JSON-RPC error response 00:23:48.645 response: 00:23:48.645 { 00:23:48.645 "code": -110, 00:23:48.645 "message": "Connection timed out" 00:23:48.645 } 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2266722 00:23:48.645 15:32:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:48.645 rmmod nvme_tcp 00:23:48.645 rmmod nvme_fabrics 00:23:48.645 rmmod nvme_keyring 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2266520 ']' 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2266520 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2266520 ']' 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2266520 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2266520 00:23:48.645 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:23:48.646 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:48.646 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2266520' 00:23:48.646 killing process with pid 2266520 00:23:48.646 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2266520 00:23:48.646 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2266520 00:23:48.646 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:48.646 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:48.646 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:48.646 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:48.646 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:48.646 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:48.646 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:48.904 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:48.904 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:48.904 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.904 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.904 15:32:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.809 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:23:50.809 00:23:50.809 real 0m18.327s 00:23:50.809 user 0m22.605s 00:23:50.809 sys 0m6.016s 00:23:50.809 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:50.809 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:50.809 ************************************ 00:23:50.809 END TEST nvmf_host_discovery 00:23:50.809 ************************************ 00:23:50.809 15:32:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:50.809 15:32:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:50.809 15:32:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:50.809 15:32:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.809 ************************************ 00:23:50.809 START TEST nvmf_host_multipath_status 00:23:50.809 ************************************ 00:23:50.809 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:51.069 * Looking for test storage... 
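The nvmf_host_discovery run that finishes above repeatedly wraps `rpc_cmd` in a `NOT` helper to assert that a duplicate `bdev_nvme_start_discovery` call fails with JSON-RPC error -17 ("File exists"). A simplified, standalone sketch of that expect-failure pattern follows; the real helper in `autotest_common.sh` additionally special-cases exit statuses above 128 and matches expected error strings, and `duplicate_discovery` here is a hypothetical stand-in for the RPC call, not SPDK code.

```shell
# Simplified version of the NOT helper: succeed only when the wrapped
# command fails (the duplicate-discovery calls above rely on this).
NOT() {
    local es=0
    "$@" || es=$?        # run the command, capture its exit status
    (( es != 0 ))        # invert: nonzero status becomes success
}

# Hypothetical stand-in for "rpc_cmd ... bdev_nvme_start_discovery" when
# a discovery controller with that name already exists.
duplicate_discovery() {
    echo '{"code": -17, "message": "File exists"}' >&2
    return 1
}

NOT duplicate_discovery && echo "duplicate discovery correctly rejected"
# prints "duplicate discovery correctly rejected"
```

The inversion is why the log shows `es=1` followed by `(( !es == 0 ))` after each expected failure: the test passes precisely because the RPC errored out.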
00:23:51.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:51.069 15:32:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:51.069 15:32:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:51.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.069 --rc genhtml_branch_coverage=1 00:23:51.069 --rc genhtml_function_coverage=1 00:23:51.069 --rc genhtml_legend=1 00:23:51.069 --rc geninfo_all_blocks=1 00:23:51.069 --rc geninfo_unexecuted_blocks=1 00:23:51.069 00:23:51.069 ' 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:51.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.069 --rc genhtml_branch_coverage=1 00:23:51.069 --rc genhtml_function_coverage=1 00:23:51.069 --rc genhtml_legend=1 00:23:51.069 --rc geninfo_all_blocks=1 00:23:51.069 --rc geninfo_unexecuted_blocks=1 00:23:51.069 00:23:51.069 ' 00:23:51.069 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:51.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.069 --rc genhtml_branch_coverage=1 00:23:51.069 --rc genhtml_function_coverage=1 00:23:51.069 --rc genhtml_legend=1 00:23:51.069 --rc geninfo_all_blocks=1 00:23:51.069 --rc geninfo_unexecuted_blocks=1 00:23:51.069 00:23:51.069 ' 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:51.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.070 --rc genhtml_branch_coverage=1 00:23:51.070 --rc genhtml_function_coverage=1 00:23:51.070 --rc genhtml_legend=1 00:23:51.070 --rc geninfo_all_blocks=1 00:23:51.070 --rc geninfo_unexecuted_blocks=1 00:23:51.070 00:23:51.070 ' 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:51.070 
15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:51.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
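The log above records a real shell error from `nvmf/common.sh` line 33, `[: : integer expression expected`: the script evaluates `'[' '' -eq 1 ']'` because `$NVMF_APP_SHM_ID` expands empty at that point. A minimal sketch of the failure mode and a defensive default-expansion form (an illustration, not the actual SPDK fix):

```shell
# Reproduce the condition from the log: the variable is empty, so a bare
# [ "$NVMF_APP_SHM_ID" -eq 1 ] would fail with "integer expression expected".
NVMF_APP_SHM_ID=""

# Guarded form: ${var:-0} substitutes 0 when the variable is empty or unset,
# so the numeric test always receives an integer.
if [ "${NVMF_APP_SHM_ID:-0}" -eq 1 ]; then
    echo "shm id is 1"
else
    echo "shm id unset or not 1"
fi
# prints "shm id unset or not 1"
```

The error is harmless here because the test only appends extra app arguments when the ID is set, but the same pattern would abort a script running under `set -e`.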
00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:51.070 15:32:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:51.070 15:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
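In this stretch of the log, `gather_supported_nvmf_pci_devs` declares `e810`, `x722`, and `mlx` arrays and then buckets NICs by PCI vendor:device ID (the run finds two `0x8086:0x159b` E810 ports). A standalone sketch of that classification using IDs visible in the log; the real script fills the arrays from a `pci_bus_cache` lookup rather than a literal map:

```shell
# Map PCI vendor:device IDs (Intel 0x8086, Mellanox 0x15b3) to a NIC
# family, mirroring the e810/x722/mlx bucketing in nvmf/common.sh.
declare -A nic_family=(
    ["0x8086:0x1592"]="e810"
    ["0x8086:0x159b"]="e810"
    ["0x8086:0x37d2"]="x722"
    ["0x15b3:0x1017"]="mlx"
    ["0x15b3:0x1019"]="mlx"
)

classify_nic() {
    # $1 = vendor ID, $2 = device ID; unknown pairs fall through
    echo "${nic_family["$1:$2"]:-unknown}"
}

classify_nic 0x8086 0x159b   # prints "e810" (the ports found in this run)
```

The family decides which driver checks run next, which is why the log immediately tests `ice == unknown` and `ice == unbound` for each discovered port.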
00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:57.637 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:57.637 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:57.637 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:57.638 Found net devices under 0000:86:00.0: cvl_0_0 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.638 15:33:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:57.638 Found net devices under 0000:86:00.1: cvl_0_1 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.638 15:33:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:57.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:23:57.638 00:23:57.638 --- 10.0.0.2 ping statistics --- 00:23:57.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.638 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:57.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:23:57.638 00:23:57.638 --- 10.0.0.1 ping statistics --- 00:23:57.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.638 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2271891 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 2271891 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2271891 ']' 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.638 15:33:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:57.638 [2024-11-20 15:33:00.866481] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:23:57.638 [2024-11-20 15:33:00.866527] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.638 [2024-11-20 15:33:00.948077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:57.638 [2024-11-20 15:33:00.992389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.638 [2024-11-20 15:33:00.992422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
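After launching `nvmf_tgt` inside the `cvl_0_0_ns_spdk` namespace, the trace calls `waitforlisten`, which retries (note `local max_retries=100` above) until the target answers on `/var/tmp/spdk.sock`. A minimal sketch of that retry loop follows; the `waitforsocket` name, the plain temp file standing in for the RPC socket, and the `test -e` probe in place of a real RPC round-trip are all simplifications assumed for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll for a socket path with a
# bounded retry count, failing loudly on timeout.

waitforsocket() {
    local sock=$1 max_retries=${2:-100} i=0
    while (( i++ < max_retries )); do
        # Real waitforlisten issues an RPC to verify the app is responsive;
        # existence of the path is a stand-in here.
        [[ -e $sock ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

sock=$(mktemp -u)               # path that does not exist yet
( sleep 0.3; : > "$sock" ) &    # simulate the target coming up asynchronously
waitforsocket "$sock" 50 && echo "listening on $sock"
wait
```

The same wait-then-proceed shape governs the rest of the run: only after the socket answers does the script issue `nvmf_create_transport`, `nvmf_create_subsystem`, and the listener RPCs seen below.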
00:23:57.638 [2024-11-20 15:33:00.992430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.638 [2024-11-20 15:33:00.992437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.638 [2024-11-20 15:33:00.992442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.638 [2024-11-20 15:33:00.993667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.638 [2024-11-20 15:33:00.993668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.638 15:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.638 15:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:57.638 15:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:57.638 15:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:57.638 15:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:57.638 15:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.639 15:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2271891 00:23:57.639 15:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:57.639 [2024-11-20 15:33:01.315124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.639 15:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:23:57.897 Malloc0 00:23:57.897 15:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:57.897 15:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:58.156 15:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:58.414 [2024-11-20 15:33:02.139490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.414 15:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:58.673 [2024-11-20 15:33:02.335931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:58.673 15:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:58.673 15:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2272220 00:23:58.673 15:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:58.673 15:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2272220 /var/tmp/bdevperf.sock 00:23:58.673 15:33:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2272220 ']' 00:23:58.673 15:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.673 15:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:58.673 15:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:58.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:58.673 15:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:58.673 15:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:58.932 15:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:58.932 15:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:58.932 15:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:58.932 15:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:59.500 Nvme0n1 00:23:59.500 15:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:59.758 Nvme0n1 00:23:59.758 15:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:59.758 15:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:02.292 15:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:02.292 15:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:02.292 15:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:02.292 15:33:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:03.227 15:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:03.227 15:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:03.227 15:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.228 15:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:03.486 15:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.486 15:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:03.486 15:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.486 15:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:03.745 15:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:03.745 15:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:03.745 15:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.745 15:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:04.004 15:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.004 15:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:04.004 15:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:04.004 15:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.004 15:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.004 15:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:04.004 15:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.004 15:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:04.263 15:33:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.263 15:33:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:04.263 15:33:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.263 15:33:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:04.522 15:33:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.522 15:33:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:04.522 15:33:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:04.781 15:33:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:05.039 15:33:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:05.975 15:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:05.975 15:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:05.975 15:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.975 15:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:06.233 15:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.233 15:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:06.233 15:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.233 15:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:06.233 15:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.233 15:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:06.233 15:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.233 15:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:06.491 15:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.491 15:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:06.491 15:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.491 15:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:06.749 15:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.749 15:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:06.749 15:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.749 15:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:07.007 15:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.007 15:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:07.008 15:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.008 15:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:07.266 15:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.266 15:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:07.266 15:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:07.266 15:33:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:07.525 15:33:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:08.460 15:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:08.460 15:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:08.460 15:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.460 15:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:08.719 15:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.719 15:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:08.719 15:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.719 15:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:08.978 15:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:08.978 15:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:08.978 15:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.978 15:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:09.237 15:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.237 15:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:09.237 15:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.237 15:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:09.495 15:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.495 15:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:09.495 15:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.495 15:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:09.754 15:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.754 15:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:09.754 15:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:09.754 15:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.754 15:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.754 15:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:09.754 15:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:10.012 15:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:10.271 15:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:11.204 15:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:11.204 15:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:11.204 15:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.204 15:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:11.462 15:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.462 15:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:11.463 15:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.463 15:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:11.721 15:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.721 15:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:11.721 15:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.721 15:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:11.980 15:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.980 15:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:11.980 15:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.980 15:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:12.238 15:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.238 15:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:12.238 15:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.238 15:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:12.496 15:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.496 15:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:12.496 15:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.496 15:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:12.496 15:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:12.496 15:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:12.496 15:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:12.754 15:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:13.012 15:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:13.948 15:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:13.948 15:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:13.948 15:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.948 15:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:14.206 15:33:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:14.206 15:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:14.206 15:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.206 15:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:14.465 15:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:14.465 15:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:14.465 15:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.465 15:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:14.723 15:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.723 15:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:14.723 15:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.723 15:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:14.723 
15:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.723 15:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:14.723 15:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.723 15:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:14.982 15:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:14.982 15:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:14.982 15:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:14.982 15:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.240 15:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:15.240 15:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:15.240 15:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:15.498 15:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:15.498 15:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:17.037 15:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:17.037 15:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:17.037 15:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.037 15:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:17.037 15:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:17.037 15:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:17.037 15:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.037 15:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:17.037 15:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.037 15:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:17.037 15:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.037 15:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:17.295 15:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.295 15:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:17.295 15:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.295 15:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:17.553 15:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.553 15:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:17.553 15:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.553 15:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:17.553 15:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:17.553 15:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:17.553 15:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.553 15:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:17.811 15:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.811 15:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:18.068 15:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:18.068 15:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:18.326 15:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:18.585 15:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:19.519 15:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:19.519 15:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:19.519 15:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:19.519 15:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:19.778 15:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.778 15:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:19.778 15:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.778 15:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:20.036 15:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.036 15:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:20.036 15:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.036 15:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:20.295 15:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.295 15:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:20.295 15:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:20.295 15:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:20.295 15:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.295 15:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:20.295 15:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.295 15:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:20.554 15:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.554 15:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:20.554 15:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.554 15:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:20.812 15:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.812 15:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:20.812 15:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:21.071 15:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:21.329 15:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:22.264 15:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:22.264 15:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:22.264 15:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.264 15:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:22.523 15:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:22.523 15:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:22.523 15:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:22.523 15:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.781 15:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.781 15:33:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:22.781 15:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:22.781 15:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.041 15:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.041 15:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:23.041 15:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.041 15:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:23.041 15:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.041 15:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:23.041 15:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.041 15:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:23.300 15:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.300 
15:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:23.300 15:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.300 15:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:23.559 15:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.559 15:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:23.559 15:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:23.817 15:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:24.076 15:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:25.014 15:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:25.014 15:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:25.014 15:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.014 15:33:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:25.273 15:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.273 15:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:25.273 15:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.273 15:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:25.530 15:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.530 15:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:25.530 15:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.530 15:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:25.530 15:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.530 15:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:25.530 15:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.530 15:33:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:25.788 15:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.788 15:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:25.788 15:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.788 15:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:26.046 15:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.046 15:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:26.046 15:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.046 15:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:26.305 15:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.305 15:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:26.305 15:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:26.564 15:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:26.822 15:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:27.759 15:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:27.759 15:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:27.759 15:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.759 15:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:28.017 15:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.017 15:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:28.017 15:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.017 15:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:28.276 15:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:28.276 15:33:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:28.276 15:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.276 15:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:28.276 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.276 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:28.276 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.276 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:28.536 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.536 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:28.536 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.536 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:28.796 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.796 
15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:28.796 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.796 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:29.055 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:29.055 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2272220 00:24:29.055 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2272220 ']' 00:24:29.055 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2272220 00:24:29.055 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:29.055 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.055 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2272220 00:24:29.055 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:29.055 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:29.055 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2272220' 00:24:29.055 killing process with pid 2272220 00:24:29.055 15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2272220 00:24:29.055 
15:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2272220 00:24:29.055 { 00:24:29.055 "results": [ 00:24:29.055 { 00:24:29.055 "job": "Nvme0n1", 00:24:29.055 "core_mask": "0x4", 00:24:29.055 "workload": "verify", 00:24:29.055 "status": "terminated", 00:24:29.055 "verify_range": { 00:24:29.055 "start": 0, 00:24:29.055 "length": 16384 00:24:29.055 }, 00:24:29.055 "queue_depth": 128, 00:24:29.055 "io_size": 4096, 00:24:29.055 "runtime": 29.118604, 00:24:29.055 "iops": 10444.66280045568, 00:24:29.055 "mibps": 40.79946406428, 00:24:29.055 "io_failed": 0, 00:24:29.055 "io_timeout": 0, 00:24:29.055 "avg_latency_us": 12234.768876259062, 00:24:29.055 "min_latency_us": 140.68869565217392, 00:24:29.055 "max_latency_us": 3078254.4139130437 00:24:29.055 } 00:24:29.055 ], 00:24:29.055 "core_count": 1 00:24:29.055 } 00:24:29.339 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2272220 00:24:29.339 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:29.339 [2024-11-20 15:33:02.397287] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:24:29.339 [2024-11-20 15:33:02.397340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2272220 ] 00:24:29.339 [2024-11-20 15:33:02.475918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.339 [2024-11-20 15:33:02.517037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.339 Running I/O for 90 seconds... 
00:24:29.339 11308.00 IOPS, 44.17 MiB/s [2024-11-20T14:33:33.247Z] 11267.50 IOPS, 44.01 MiB/s [2024-11-20T14:33:33.247Z] 11261.33 IOPS, 43.99 MiB/s [2024-11-20T14:33:33.247Z] 11288.75 IOPS, 44.10 MiB/s [2024-11-20T14:33:33.247Z] 11274.00 IOPS, 44.04 MiB/s [2024-11-20T14:33:33.247Z] 11280.17 IOPS, 44.06 MiB/s [2024-11-20T14:33:33.247Z] 11279.86 IOPS, 44.06 MiB/s [2024-11-20T14:33:33.247Z] 11290.38 IOPS, 44.10 MiB/s [2024-11-20T14:33:33.247Z] 11284.44 IOPS, 44.08 MiB/s [2024-11-20T14:33:33.247Z] 11283.50 IOPS, 44.08 MiB/s [2024-11-20T14:33:33.247Z] 11281.82 IOPS, 44.07 MiB/s [2024-11-20T14:33:33.247Z] 11277.75 IOPS, 44.05 MiB/s [2024-11-20T14:33:33.247Z] [2024-11-20 15:33:16.541838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.339 [2024-11-20 15:33:16.541876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:29.339 [2024-11-20 15:33:16.541897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.339 [2024-11-20 15:33:16.541921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:29.339 [2024-11-20 15:33:16.541935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.339 [2024-11-20 15:33:16.541942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:29.339 [2024-11-20 15:33:16.541962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.339 [2024-11-20 15:33:16.541969] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:29.339 [2024-11-20 15:33:16.541982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.339 [2024-11-20 15:33:16.541988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:29.339 [2024-11-20 15:33:16.542001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.339 [2024-11-20 15:33:16.542008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:29.339 [2024-11-20 15:33:16.542021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.339 [2024-11-20 15:33:16.542028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:29.339 [2024-11-20 15:33:16.542040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.339 [2024-11-20 15:33:16.542046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:29.339 [2024-11-20 15:33:16.542059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.339 [2024-11-20 15:33:16.542066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:29.339 [2024-11-20 15:33:16.542078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.339 [2024-11-20 15:33:16.542091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:29.339 [2024-11-20 15:33:16.542104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.339 [2024-11-20 15:33:16.542110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:29.339 [2024-11-20 15:33:16.542123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.339 [2024-11-20 15:33:16.542130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:29.339 [2024-11-20 15:33:16.542143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.339 [2024-11-20 15:33:16.542149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 
nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110544 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.340 [2024-11-20 15:33:16.542544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.340 [2024-11-20 15:33:16.542943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.542986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.542993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.543005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.543012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 
cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.543025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.543031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.543044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.543051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.543063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.543070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.543082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.543089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.543101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.543108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.543120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110616 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.543127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.543139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.543147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.543159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.543166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.543178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.543185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.543198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.543205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.543218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.543225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:24:29.340 [2024-11-20 15:33:16.543237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.543244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.543257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.543264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.543277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.543284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:29.340 [2024-11-20 15:33:16.543296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.340 [2024-11-20 15:33:16.543303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.543315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.543322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.543334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:29.341 [2024-11-20 15:33:16.543341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.543353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.543360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.543373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.543380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.543392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.543398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.543411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.543417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.543429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.543438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:29.341 [2024-11-20 15:33:16.543451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.543457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.543469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.543476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.543488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.543496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.543508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.543515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.543527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.543534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.543546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 
[2024-11-20 15:33:16.543553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.543565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.543572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.543584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.543591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.543603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.543610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.543622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.543629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.543641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.543648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 
15:33:16.543660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.543668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.543681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.543694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.543707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.543714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.544068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.544080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.544094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.544101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.544113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 
15:33:16.544120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.544132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.544139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.544151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.544158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.544170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.544177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.544189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.544195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.544208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.544214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 
15:33:16.544227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.544234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.544246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.544256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.544268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.544275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.544287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.544294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.544307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.544313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.544326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.544332] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.544344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.544354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.544367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.544374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:29.341 [2024-11-20 15:33:16.544526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.341 [2024-11-20 15:33:16.544536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.342 [2024-11-20 15:33:16.544555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.342 [2024-11-20 15:33:16.544575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544587] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.342 [2024-11-20 15:33:16.544594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.342 [2024-11-20 15:33:16.544613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.342 [2024-11-20 15:33:16.544632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.342 [2024-11-20 15:33:16.544653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.544673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.544693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.544712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.544731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.544750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.544769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.544788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.544809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.544829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.544848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.544867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.544892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.544911] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.544930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.544956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.342 [2024-11-20 15:33:16.544975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.544987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.544994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.545007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.545014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.545026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.545033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.545045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.545052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.545065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.545071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.545084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.545091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.545103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.545110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.545124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.545133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.545145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.545152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.545164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.545171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.545184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.545191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.545203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.545210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.545222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.545229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.545241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.545248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.545260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.545266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.545279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.342 [2024-11-20 15:33:16.545286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:29.342 [2024-11-20 15:33:16.545576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.342 [2024-11-20 15:33:16.545586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.545607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.545626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.545648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.545667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.545686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.545706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.545726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.545745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.545765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.545784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.545804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.545823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.545842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.545860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.545881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.545900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.545920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.545939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.545963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.545983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.545995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.546002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.546014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.546020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.546034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.546041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.546054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.546060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:29.343 [2024-11-20 15:33:16.546072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.343 [2024-11-20 15:33:16.546079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:24:29.343 [... ~100 further identical nvme_qpair.c command/completion pairs elided (timestamps 15:33:16.546-15:33:16.559): WRITE and READ I/Os on qid:1, lba 110032-111048, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:24:29.346 [2024-11-20 15:33:16.559683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.346 [2024-11-20 15:33:16.559692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:29.346 [2024-11-20 15:33:16.559709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.346 [2024-11-20 15:33:16.559718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:29.346 [2024-11-20 15:33:16.559735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.346 [2024-11-20 15:33:16.559744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:29.346 [2024-11-20 15:33:16.559761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.346 [2024-11-20 15:33:16.559770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:29.346 [2024-11-20 15:33:16.559787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.346 [2024-11-20 15:33:16.559796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:29.346 [2024-11-20 15:33:16.559813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.346 [2024-11-20 15:33:16.559822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:29.346 [2024-11-20 15:33:16.559840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.346 [2024-11-20 15:33:16.559850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:29.346 [2024-11-20 15:33:16.559867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.346 [2024-11-20 15:33:16.559877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:29.346 [2024-11-20 15:33:16.559894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.346 [2024-11-20 15:33:16.559903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:29.346 [2024-11-20 15:33:16.559920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.346 [2024-11-20 15:33:16.559929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:29.346 [2024-11-20 15:33:16.559950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.346 [2024-11-20 15:33:16.559960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.559979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.559988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.560005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.560015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.560031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.560040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.560057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.560066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.560083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.560092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.560109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.560118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.560135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.560144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.560161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.560170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.560187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.560196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.560213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.560222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.560239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.347 [2024-11-20 15:33:16.560248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.347 [2024-11-20 15:33:16.561311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561338] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561481] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.561977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.561989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.562006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.562015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.347 [2024-11-20 15:33:16.562032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.347 [2024-11-20 15:33:16.562041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.562975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.562984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.563001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.563010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.563030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.563040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.563057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.563066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.563082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.563092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.563108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.563118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.563134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.563144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.563160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.563170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.563187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.563196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.563213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.563222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.563238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.563248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.563265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.563274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.563291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.563300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.563317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.348 [2024-11-20 15:33:16.563326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:29.348 [2024-11-20 15:33:16.563343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.349 [2024-11-20 15:33:16.563355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.563372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.563383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.563400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.563410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.563427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.563436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.563453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.563462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.563479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.563488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.563506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.563515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.563532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.563541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.563558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.563567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.563584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.563593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.563610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.563619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.563636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.563646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.563662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.563674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.563691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.563700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.563717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.563726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.563744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.563753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.349 [2024-11-20 15:33:16.564285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.564314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.564343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.564370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.564396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.564422] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.564449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.564475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.564504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.564531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.564557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.564583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.564609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.564635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.564662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.564687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.349 [2024-11-20 15:33:16.564714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.349 [2024-11-20 15:33:16.564740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.349 [2024-11-20 15:33:16.564768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.349 [2024-11-20 15:33:16.564794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.349 [2024-11-20 15:33:16.564820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.349 [2024-11-20 15:33:16.564848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:29.349 [2024-11-20 15:33:16.564865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.349 [2024-11-20 15:33:16.564875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.564892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.564901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.564917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.564927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.564944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.564960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.564977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.564986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.565567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.350 [2024-11-20 15:33:16.565593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.350 [2024-11-20 15:33:16.565619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.565636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.570429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.570452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.570461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.570479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.570489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.570509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.570520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.570539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.570550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.570569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.570580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.570599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.570612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.570629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.570639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.570657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.570666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.570683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.570692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:29.350 [2024-11-20 15:33:16.570709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.350 [2024-11-20 15:33:16.570719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:29.351 [2024-11-20 15:33:16.570736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.351 [2024-11-20 15:33:16.570745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:29.351 [2024-11-20 15:33:16.570762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.351 [2024-11-20 15:33:16.570771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:29.351 [2024-11-20 15:33:16.570788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.351 [2024-11-20 15:33:16.570797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:29.351 [2024-11-20 15:33:16.570814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.351 [2024-11-20 15:33:16.570823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:29.351 [2024-11-20 15:33:16.570840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.351 [2024-11-20 15:33:16.570849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
[... same command/completion pair repeated for every outstanding I/O on qid:1: WRITE commands (lba:110296-111048, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (lba:110032-110288, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing from 0077 (wrapping through 0000) to 0069, timestamps 2024-11-20 15:33:16.570849 through 15:33:16.575535 ...]
00:24:29.354 [2024-11-20 15:33:16.575525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.575535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.575556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.575566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.575586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.575597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.575617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.575629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.575649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.575659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.575680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.575690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.575711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.575721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.575741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.575752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.575772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.575783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.575803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.575814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.575834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.575845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.575868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.575879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.575899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.575910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.575931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.575942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.575967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.575979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.576000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.576011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.576031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.576043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.576064] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.576074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.576095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.576106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.576127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.576138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.577127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.577147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.577171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.577182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.577202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.577213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.577237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.577249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.577269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.577280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.577300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.577310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.577331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.577342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.577362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.577373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.577393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.354 [2024-11-20 15:33:16.577404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:29.354 [2024-11-20 15:33:16.577425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.577436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.577456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.577467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.577487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.577498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.577518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.577529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.577549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.577560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.577580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.577591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.577611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.577625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.577645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.577656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.577677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.577687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.577707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.577718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.577738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.577749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.577769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.577780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.577800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.577811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.577831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.577842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.577862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.577873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.577893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.577904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.577925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.577936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.577963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.577975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.577995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.578009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.578029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.578040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.578060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.578070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.578091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.578102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.578122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.578133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.578153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.578164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.578184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.578195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.578215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.578226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.578246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.578257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.578277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.578288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.578308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.578319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.578340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.578351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.578371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.355 [2024-11-20 15:33:16.578384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.578405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.355 [2024-11-20 15:33:16.578416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.578437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.355 [2024-11-20 15:33:16.578448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.578468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.355 [2024-11-20 15:33:16.578479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.578499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.355 [2024-11-20 15:33:16.578510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.578531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.355 [2024-11-20 15:33:16.578542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.578561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.355 [2024-11-20 15:33:16.578572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.578593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.355 [2024-11-20 15:33:16.578604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:29.355 [2024-11-20 15:33:16.578624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.355 [2024-11-20 15:33:16.578635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.578655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.578666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.578686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.578697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.578717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.578728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.578749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.578759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.578783] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.578794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.579506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.579523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.579546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.579557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.579577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.356 [2024-11-20 15:33:16.579589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.579609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.579620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.579640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.579652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.579672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.579683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.579704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.579715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.579735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.579746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.579766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.579777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.579797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.579808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.579828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.579839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.579863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.579875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.579895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.579906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.579926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.579938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.579963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.579975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.579995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.580006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.580026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.580037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.580057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.580068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.580088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.356 [2024-11-20 15:33:16.580099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.580120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.356 [2024-11-20 15:33:16.580130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.580151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.356 [2024-11-20 15:33:16.580162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.580182] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.356 [2024-11-20 15:33:16.580193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.580213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.356 [2024-11-20 15:33:16.580224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.580244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.356 [2024-11-20 15:33:16.580257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.580277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.356 [2024-11-20 15:33:16.580288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.580308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.356 [2024-11-20 15:33:16.580319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.580339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.356 [2024-11-20 15:33:16.580350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.580370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.356 [2024-11-20 15:33:16.580381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.580401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.356 [2024-11-20 15:33:16.580412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.580433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.356 [2024-11-20 15:33:16.580444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.580464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.356 [2024-11-20 15:33:16.580475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.580495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.356 [2024-11-20 15:33:16.580505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.580526] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.356 [2024-11-20 15:33:16.580537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:29.356 [2024-11-20 15:33:16.580557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.580568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.580588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.580599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.580619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.580635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.580656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.580666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.580687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.580698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.580718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.580729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.580749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.580760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.580780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.580791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.580811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.580822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.580843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.580854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.580874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.580885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.580905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.580916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.580936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.580951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.580972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.580983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.581016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.581048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.581079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.581110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.357 [2024-11-20 15:33:16.581141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.357 [2024-11-20 15:33:16.581172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.581203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.581235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.581266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.581297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.581328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.581359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.581391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.581424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.581456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.581487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.581518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.581549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.581580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.581611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:29.357 [2024-11-20 15:33:16.581632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.357 [2024-11-20 15:33:16.581643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.581663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.581674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.581695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.581706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.581726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.581737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.581757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.581768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.581791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.581802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.581822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.581833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.582829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.582851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.582874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.582887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.582907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.582919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.582941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.582959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.582980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.582991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.358 [2024-11-20 15:33:16.583842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:29.358 [2024-11-20 15:33:16.583861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.359 [2024-11-20 15:33:16.583872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.583892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.359 [2024-11-20 15:33:16.583903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.583924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.359 [2024-11-20 15:33:16.583935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.583961] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.359 [2024-11-20 15:33:16.583972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.583992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.359 [2024-11-20 15:33:16.584004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.584024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.359 [2024-11-20 15:33:16.584034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.584054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.359 [2024-11-20 15:33:16.584065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.584086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.359 [2024-11-20 15:33:16.584097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.584117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.359 [2024-11-20 15:33:16.584128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.584148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.584159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.584179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.584190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.584210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.584221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.584241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.584253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.584273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.584283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.584304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.584315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.584338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.584349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.584369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.584380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.584400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.584411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.584432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.584442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.584462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.584474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.584495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.584505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.585007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.585020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.585035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.585042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.585056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.585063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.585076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.359 [2024-11-20 15:33:16.585083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.585096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.585104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.585117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.585124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.585140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.585147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.585160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.585167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.585180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.585188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.585201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.585208] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.585222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.585229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.585242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.585249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.585262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.585270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.585283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.585290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.585303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.585311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.585324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.585331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.585344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.585351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.585364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.359 [2024-11-20 15:33:16.585372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:29.359 [2024-11-20 15:33:16.585385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.360 [2024-11-20 15:33:16.585394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.360 [2024-11-20 15:33:16.585414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.585984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.585997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.586005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.586018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.586026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.586039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.586047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.586060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.586067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.586080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.586087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.586101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.360 [2024-11-20 15:33:16.586108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.586121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.360 [2024-11-20 15:33:16.586128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.586141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.586148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.586163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.586171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.586184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.360 [2024-11-20 15:33:16.586192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:29.360 [2024-11-20 15:33:16.586204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.586212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.586225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.586232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.586245] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.586252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.586266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.586274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.586287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.586294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.586307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.586314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.586327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.586334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.586347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.586355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.586368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.586376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.586389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.586397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.586412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.586419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.586432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.586441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.586454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.586461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.586475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.586482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.586495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.586503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.586516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.586525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.586538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.586545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.587196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.587210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.587225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.587232] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.587245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.587252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.587266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.587273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.587286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.587293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.587306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.587316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.587329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.587336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.587349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.587356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.587370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.587377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.587390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.587397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.587410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.587417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.587430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.587437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.587450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.587457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.587470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.587478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.587491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.587498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.587511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.587518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.587531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.587539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:29.361 [2024-11-20 15:33:16.587551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.361 [2024-11-20 15:33:16.587563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587909] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.587989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.587996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.588009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.588016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.588030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.588037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.588050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.362 [2024-11-20 15:33:16.588057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.588072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.362 [2024-11-20 15:33:16.588079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.588092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.362 [2024-11-20 15:33:16.588099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.588113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.362 [2024-11-20 15:33:16.588120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.588133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.362 [2024-11-20 15:33:16.588140] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.588153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.362 [2024-11-20 15:33:16.588160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.588173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.362 [2024-11-20 15:33:16.588180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.588194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.362 [2024-11-20 15:33:16.588201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.588214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.362 [2024-11-20 15:33:16.588221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.588234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.362 [2024-11-20 15:33:16.588241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.588254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.362 [2024-11-20 15:33:16.588261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.588275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.362 [2024-11-20 15:33:16.588282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.588745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.362 [2024-11-20 15:33:16.588757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.588774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.362 [2024-11-20 15:33:16.588782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:29.362 [2024-11-20 15:33:16.588795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.362 [2024-11-20 15:33:16.588802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.588815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.363 [2024-11-20 15:33:16.588823] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.588836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.363 [2024-11-20 15:33:16.588843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.588856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.363 [2024-11-20 15:33:16.588863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.588876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.363 [2024-11-20 15:33:16.588883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.588896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.363 [2024-11-20 15:33:16.588904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.588917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.363 [2024-11-20 15:33:16.588924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.588937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.363 [2024-11-20 15:33:16.588944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.588964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.363 [2024-11-20 15:33:16.588971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.588984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.363 [2024-11-20 15:33:16.588991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.589005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.363 [2024-11-20 15:33:16.589012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.589025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.363 [2024-11-20 15:33:16.589035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.589048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.363 [2024-11-20 15:33:16.589055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.589069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.363 [2024-11-20 15:33:16.589076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.589089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.363 [2024-11-20 15:33:16.589096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.589109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.363 [2024-11-20 15:33:16.589116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.589130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.363 [2024-11-20 15:33:16.589137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.589151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.363 [2024-11-20 15:33:16.589160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.589173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.363 [2024-11-20 15:33:16.589180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.589193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.363 [2024-11-20 15:33:16.589200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.589213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.363 [2024-11-20 15:33:16.589220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.589234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.363 [2024-11-20 15:33:16.589241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.589254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.363 [2024-11-20 15:33:16.589261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.589274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.363 [2024-11-20 15:33:16.589284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:29.363 [2024-11-20 15:33:16.589297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.363 [2024-11-20 15:33:16.589304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:29.363
[... repeated NOTICE pairs elided: nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion for qid:1 — WRITE commands (lba 110336 through 111048, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (lba 110032 through 110224, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completion failing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0; timestamps 2024-11-20 15:33:16.589297 through 15:33:16.592637 ...]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.366 [2024-11-20 15:33:16.592643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:29.366 [2024-11-20 15:33:16.592657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.366 [2024-11-20 15:33:16.592664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:29.366 [2024-11-20 15:33:16.592677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.366 [2024-11-20 15:33:16.592684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:29.366 [2024-11-20 15:33:16.592697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.366 [2024-11-20 15:33:16.592704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.366 [2024-11-20 15:33:16.592717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.366 [2024-11-20 15:33:16.592724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:29.366 [2024-11-20 15:33:16.592737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.366 [2024-11-20 15:33:16.592744] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:29.366 [2024-11-20 15:33:16.592757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.366 [2024-11-20 15:33:16.592780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:29.366 [2024-11-20 15:33:16.592793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.366 [2024-11-20 15:33:16.592800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:29.366 [2024-11-20 15:33:16.592813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.366 [2024-11-20 15:33:16.592820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:29.366 [2024-11-20 15:33:16.592833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.366 [2024-11-20 15:33:16.592840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:29.366 [2024-11-20 15:33:16.592853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.366 [2024-11-20 15:33:16.592860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:29.366 [2024-11-20 15:33:16.592873] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.366 [2024-11-20 15:33:16.592881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:29.366 [2024-11-20 15:33:16.592894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.366 [2024-11-20 15:33:16.592904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.592918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.592925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593367] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.367 [2024-11-20 15:33:16.593861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.367 [2024-11-20 15:33:16.593881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.593983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.593990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.594003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.594011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.594024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.594031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.594044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.594051] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.594065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.367 [2024-11-20 15:33:16.594072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:29.367 [2024-11-20 15:33:16.594085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594276] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.594982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.594995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.595002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.595014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.595021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.595035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.595041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.595054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.595060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.595073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.595081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.595093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.595100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.595112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.595119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.595133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.595140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.595153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.595160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.595173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.595179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.595192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.595199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.595212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.595218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.595231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.595237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.595250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.595256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.595269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.595276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.595288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.595294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.595307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.368 [2024-11-20 15:33:16.595313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:29.368 [2024-11-20 15:33:16.595326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.369 [2024-11-20 15:33:16.595333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.369 [2024-11-20 15:33:16.595354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.369 [2024-11-20 15:33:16.595375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.369 [2024-11-20 15:33:16.595395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.369 [2024-11-20 15:33:16.595416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.369 [2024-11-20 15:33:16.595435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.369 [2024-11-20 15:33:16.595454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.369 [2024-11-20 15:33:16.595473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.369 [2024-11-20 15:33:16.595494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.369 [2024-11-20 15:33:16.595513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.369 [2024-11-20 15:33:16.595533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.369 [2024-11-20 15:33:16.595553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.369 [2024-11-20 15:33:16.595573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.369 [2024-11-20 15:33:16.595593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.595614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.595633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.595652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.595671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.595690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.595709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.595728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.595747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.595766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.595785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.595804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.595824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.595845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.595866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.595879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.595886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.596347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.369 [2024-11-20 15:33:16.596360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.596374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.596381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.596393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.596400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.596412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.596419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.596431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.596438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.596450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.596457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.596470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.596476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.596489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.596496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.596509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.596515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:29.369 [2024-11-20 15:33:16.596528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.369 [2024-11-20 15:33:16.596535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.596549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.370 [2024-11-20 15:33:16.596556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.596568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.370 [2024-11-20 15:33:16.596577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.596589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.370 [2024-11-20 15:33:16.596596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.596610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.370 [2024-11-20 15:33:16.596617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.596629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.370 [2024-11-20 15:33:16.596636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.596649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.370 [2024-11-20 15:33:16.596657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.596669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.370 [2024-11-20 15:33:16.596676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.596688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.596695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.596707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.596714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.596726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.596733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.596745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.596752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.596764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.596771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.596785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.596792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.596804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.596811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.597008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.597018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.597032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.597039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.597051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.597058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.597071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.597078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.597091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.597097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.597110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.597117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.597129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.597135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.597148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.597154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.597167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.597173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.597186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.597193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.597207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.597214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.597226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.597233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.597245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.597252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.597264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.597271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.597283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.597290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.597302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.597309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.597321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.597328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.597340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.597347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.597359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.597366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:29.370 [2024-11-20 15:33:16.597378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.370 [2024-11-20 15:33:16.597385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:29.371 [2024-11-20 15:33:16.597397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.371 [2024-11-20 15:33:16.597404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.371 [2024-11-20 15:33:16.597417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.371 [2024-11-20 15:33:16.597424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:29.371 [2024-11-20 15:33:16.597436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.371 [2024-11-20 15:33:16.597444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:29.371 [2024-11-20 15:33:16.597457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.371 [2024-11-20 15:33:16.597464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:29.371 [2024-11-20 15:33:16.597476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.371 [2024-11-20 15:33:16.597483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:29.371 [2024-11-20 15:33:16.597495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.371 [2024-11-20 15:33:16.597502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:29.371 [2024-11-20 15:33:16.597514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.371 [2024-11-20 15:33:16.597521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:29.371 [2024-11-20 15:33:16.597533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.371 [2024-11-20 15:33:16.597540] nvme_qpair.c: 
[... repeated nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* entries elided: WRITE and READ commands on qid:1 (lba 110040-111040, len:8) each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), identical in pattern to the entries above ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.373 [2024-11-20 15:33:16.600595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:29.373 [2024-11-20 15:33:16.600607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.373 [2024-11-20 15:33:16.600615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:29.373 [2024-11-20 15:33:16.600628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.373 [2024-11-20 15:33:16.600635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:29.373 [2024-11-20 15:33:16.600647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.373 [2024-11-20 15:33:16.600655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:29.373 [2024-11-20 15:33:16.600668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.600676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.600688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.600696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.600709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.600716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.600728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.600735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.600748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.600755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.600769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.600776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.600788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.600795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.600808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.600815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.600828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.600836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.600848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.600855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.600867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.600874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.600886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.600893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.600905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.600912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.600927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.600933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.600946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.600957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.600970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.600977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.600989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.600996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.374 [2024-11-20 15:33:16.601017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.374 [2024-11-20 15:33:16.601036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.601055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.601076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.601095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.601114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.601134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.601152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.601172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.601191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.601561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.601582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.601602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.601621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.601640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.601659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.601682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.601702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.601723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.601743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.601762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.374 [2024-11-20 15:33:16.601781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:29.374 [2024-11-20 15:33:16.601795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.601802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.601815] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.601823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.601836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.601844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.601856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.601863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.601875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.601882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.601895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.601902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.601916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.601923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.601935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.601942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.601968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.601975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.601988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.601994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.375 [2024-11-20 15:33:16.602738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:29.375 [2024-11-20 15:33:16.602751] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.376 [2024-11-20 15:33:16.602757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0
[log condensed: approximately 130 further command/completion pairs from nvme_qpair.c omitted — WRITE commands (SGL DATA BLOCK OFFSET 0x0 len:0x1000, len:8) covering lba 110296–111048 and READ commands (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, len:8) covering lba 110032–110288, all on sqid:1 nsid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0, sqhd advancing 001f through 0010, timestamps 2024-11-20 15:33:16.602770 through 15:33:16.605644]
[2024-11-20 15:33:16.605685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.378 [2024-11-20 15:33:16.605694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:29.378 [2024-11-20 15:33:16.605712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.378 [2024-11-20 15:33:16.605719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:16.605738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:16.605744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:16.605762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:16.605771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:16.605789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:16.605796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:16.605814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:16.605821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:16.605839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:16.605845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:16.605864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:16.605871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:16.605910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:16.605918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:16.605938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:16.605944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:16.605969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:16.605981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:16.605999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:16.606006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:16.606025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:16.606032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:16.606050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:16.606057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:16.606076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:16.606083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:29.379 11140.46 IOPS, 43.52 MiB/s [2024-11-20T14:33:33.287Z] 10344.71 IOPS, 40.41 MiB/s [2024-11-20T14:33:33.287Z] 9655.07 IOPS, 37.72 MiB/s [2024-11-20T14:33:33.287Z] 9113.56 IOPS, 35.60 MiB/s [2024-11-20T14:33:33.287Z] 9227.71 IOPS, 36.05 MiB/s [2024-11-20T14:33:33.287Z] 9344.44 IOPS, 36.50 MiB/s [2024-11-20T14:33:33.287Z] 9520.95 IOPS, 37.19 MiB/s [2024-11-20T14:33:33.287Z] 9717.90 IOPS, 37.96 MiB/s [2024-11-20T14:33:33.287Z] 9894.95 IOPS, 38.65 MiB/s [2024-11-20T14:33:33.287Z] 9958.64 IOPS, 38.90 MiB/s [2024-11-20T14:33:33.287Z] 10015.48 IOPS, 39.12 MiB/s [2024-11-20T14:33:33.287Z] 10060.21 IOPS, 39.30 MiB/s [2024-11-20T14:33:33.287Z] 10188.16 IOPS, 39.80 MiB/s [2024-11-20T14:33:33.287Z] 10303.81 IOPS, 40.25 MiB/s [2024-11-20T14:33:33.287Z] [2024-11-20 15:33:30.483464] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:30.483504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:30.483538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:124872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:30.483546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:30.483560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:30.483567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:30.483580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:124904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:30.483587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:30.483599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:124920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:30.483606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:30.483619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:30.483627] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:30.483639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:30.483646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:30.483659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:30.483666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:30.483678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:124984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:30.483685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:30.483697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:125000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:30.483704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:30.483716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:125016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:30.483723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:30.483736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:30.483748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:30.483760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:30.483768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:30.483780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.379 [2024-11-20 15:33:30.483787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:30.483800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:125056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:30.483807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:30.483819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:30.483826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:30.483838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:30.483845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:30.483859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.379 [2024-11-20 15:33:30.483866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:29.379 [2024-11-20 15:33:30.483879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.483886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.483898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.483905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.483917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.483924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.483936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.483943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.483962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.483969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.483981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.483988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:125232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:125280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:125296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:125312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:125328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:125344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:125360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:124688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.380 [2024-11-20 15:33:30.484221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:125392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:125440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:125456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.380 [2024-11-20 15:33:30.484414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.380 [2024-11-20 15:33:30.484433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.380 [2024-11-20 15:33:30.484451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.380 [2024-11-20 15:33:30.484472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.484485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.484492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.485881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.485903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.485920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:125568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.485928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.485940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.485953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.485966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.485974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.485986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.485993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.486005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.486012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.486024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:125648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.380 [2024-11-20 15:33:30.486031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:29.380 [2024-11-20 15:33:30.486044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.381 [2024-11-20 15:33:30.486050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:29.381 [2024-11-20 15:33:30.486063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:125680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.381 [2024-11-20 15:33:30.486070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:29.381 [2024-11-20 15:33:30.486083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:125696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.381 [2024-11-20 15:33:30.486090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:29.381 [2024-11-20 15:33:30.486102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:124736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.381 [2024-11-20 15:33:30.486113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:29.381 [2024-11-20 15:33:30.486126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:124768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.381 [2024-11-20 15:33:30.486133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:29.381 [2024-11-20 15:33:30.486145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.381 [2024-11-20 15:33:30.486152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:29.381 [2024-11-20 15:33:30.486164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.381 [2024-11-20 15:33:30.486172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:29.381 [2024-11-20 15:33:30.486185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:124864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.381 [2024-11-20 15:33:30.486192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.381 [2024-11-20 15:33:30.486205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:124896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.381 [2024-11-20 15:33:30.486212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.381 [2024-11-20 15:33:30.486225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.381 [2024-11-20 15:33:30.486231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:29.381 [2024-11-20 15:33:30.486391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.381 [2024-11-20 15:33:30.486400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:29.381 [2024-11-20 15:33:30.486414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:124992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.381 [2024-11-20 15:33:30.486421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:29.381 [2024-11-20 15:33:30.486434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:125024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.381 [2024-11-20 15:33:30.486441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:29.381 [2024-11-20 15:33:30.486454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.381 [2024-11-20 15:33:30.486461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:29.381 [2024-11-20 15:33:30.486473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.381 [2024-11-20 15:33:30.486480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:29.381 10387.37 IOPS, 40.58 MiB/s [2024-11-20T14:33:33.289Z] 10418.89 IOPS, 40.70 MiB/s [2024-11-20T14:33:33.289Z] 10442.76 IOPS, 40.79 MiB/s [2024-11-20T14:33:33.289Z] Received shutdown signal, test time was about 29.119306 seconds 00:24:29.381 00:24:29.381 Latency(us) 00:24:29.381 [2024-11-20T14:33:33.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.381 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:29.381 Verification LBA range: start 0x0 length 0x4000 00:24:29.381 Nvme0n1 : 29.12 10444.66 40.80 0.00 0.00 12234.77 140.69 3078254.41 00:24:29.381 [2024-11-20T14:33:33.289Z] =================================================================================================================== 00:24:29.381 [2024-11-20T14:33:33.289Z] Total : 10444.66 40.80 0.00 0.00 12234.77 140.69 3078254.41 00:24:29.381 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:29.381 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:29.381 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:29.381 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:29.381 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:29.381 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:29.381 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:29.381 15:33:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:29.381 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:29.381 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:29.640 rmmod nvme_tcp 00:24:29.640 rmmod nvme_fabrics 00:24:29.640 rmmod nvme_keyring 00:24:29.640 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:29.640 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:29.640 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:29.640 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2271891 ']' 00:24:29.640 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2271891 00:24:29.640 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2271891 ']' 00:24:29.640 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2271891 00:24:29.640 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:29.640 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.640 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2271891 00:24:29.640 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:29.640 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:29.640 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2271891' 
00:24:29.640 killing process with pid 2271891 00:24:29.640 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2271891 00:24:29.640 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2271891 00:24:29.640 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:29.640 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:29.641 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:29.641 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:29.641 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:29.641 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:29.641 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:29.641 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:29.641 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:29.641 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.641 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.641 15:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.176 15:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:32.176 00:24:32.176 real 0m40.885s 00:24:32.176 user 1m51.134s 00:24:32.176 sys 0m11.626s 00:24:32.176 15:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:32.176 15:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:32.176 ************************************ 00:24:32.176 END TEST nvmf_host_multipath_status 00:24:32.176 ************************************ 00:24:32.176 15:33:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:32.176 15:33:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.177 ************************************ 00:24:32.177 START TEST nvmf_discovery_remove_ifc 00:24:32.177 ************************************ 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:32.177 * Looking for test storage... 
00:24:32.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:24:32.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.177 --rc genhtml_branch_coverage=1 00:24:32.177 --rc genhtml_function_coverage=1 00:24:32.177 --rc genhtml_legend=1 00:24:32.177 --rc geninfo_all_blocks=1 00:24:32.177 --rc geninfo_unexecuted_blocks=1 00:24:32.177 00:24:32.177 ' 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:32.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.177 --rc genhtml_branch_coverage=1 00:24:32.177 --rc genhtml_function_coverage=1 00:24:32.177 --rc genhtml_legend=1 00:24:32.177 --rc geninfo_all_blocks=1 00:24:32.177 --rc geninfo_unexecuted_blocks=1 00:24:32.177 00:24:32.177 ' 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:32.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.177 --rc genhtml_branch_coverage=1 00:24:32.177 --rc genhtml_function_coverage=1 00:24:32.177 --rc genhtml_legend=1 00:24:32.177 --rc geninfo_all_blocks=1 00:24:32.177 --rc geninfo_unexecuted_blocks=1 00:24:32.177 00:24:32.177 ' 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:32.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.177 --rc genhtml_branch_coverage=1 00:24:32.177 --rc genhtml_function_coverage=1 00:24:32.177 --rc genhtml_legend=1 00:24:32.177 --rc geninfo_all_blocks=1 00:24:32.177 --rc geninfo_unexecuted_blocks=1 00:24:32.177 00:24:32.177 ' 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:32.177 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:32.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:32.178 
15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:32.178 15:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.741 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.741 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.741 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.741 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:38.741 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.741 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.741 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.741 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.741 15:33:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:38.741 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:38.741 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.741 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:38.741 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.741 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:38.741 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.742 15:33:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:38.742 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.742 15:33:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:38.742 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:38.742 Found net devices under 0000:86:00.0: cvl_0_0 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:38.742 Found net devices under 0000:86:00.1: cvl_0_1 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.742 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:38.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:24:38.743 00:24:38.743 --- 10.0.0.2 ping statistics --- 00:24:38.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.743 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:38.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:24:38.743 00:24:38.743 --- 10.0.0.1 ping statistics --- 00:24:38.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.743 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2281366 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 2281366 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2281366 ']' 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.743 15:33:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.743 [2024-11-20 15:33:41.852531] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:24:38.743 [2024-11-20 15:33:41.852584] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.743 [2024-11-20 15:33:41.931881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.743 [2024-11-20 15:33:41.973237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.743 [2024-11-20 15:33:41.973276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:38.743 [2024-11-20 15:33:41.973284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.743 [2024-11-20 15:33:41.973290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.743 [2024-11-20 15:33:41.973296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.743 [2024-11-20 15:33:41.973868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.743 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.743 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:38.743 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:38.743 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:38.743 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.743 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.743 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:38.743 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.743 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.743 [2024-11-20 15:33:42.117089] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.743 [2024-11-20 15:33:42.125235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:38.743 null0 00:24:38.743 [2024-11-20 15:33:42.157230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:24:38.743 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.743 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2281390 00:24:38.743 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:38.743 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2281390 /tmp/host.sock 00:24:38.743 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2281390 ']' 00:24:38.743 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:38.743 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.744 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:38.744 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:38.744 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.744 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.744 [2024-11-20 15:33:42.224737] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:24:38.744 [2024-11-20 15:33:42.224777] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2281390 ] 00:24:38.744 [2024-11-20 15:33:42.295836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.744 [2024-11-20 15:33:42.336811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.744 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.744 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:38.744 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:38.744 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:38.744 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.744 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.744 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.744 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:38.744 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.744 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.744 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.744 15:33:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:38.744 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.744 15:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:39.679 [2024-11-20 15:33:43.524499] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:39.679 [2024-11-20 15:33:43.524518] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:39.679 [2024-11-20 15:33:43.524533] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:39.937 [2024-11-20 15:33:43.611794] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:39.937 [2024-11-20 15:33:43.673371] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:39.937 [2024-11-20 15:33:43.674145] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x24c79f0:1 started. 
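[Editor's note] The "Found net devices under 0000:86:00.x: cvl_0_x" lines earlier in this log come from nvmf/common.sh deriving interface names from sysfs globs. A minimal pure-bash sketch of that derivation, using hypothetical sysfs paths (no real hardware assumed):

```shell
#!/usr/bin/env bash
# Sketch of common.sh@411/@427: glob the net/ directory under each PCI
# device, then strip everything through the last '/' to keep only the
# interface name. Paths below are stand-ins for the sysfs glob result.
pci_net_devs=("/sys/bus/pci/devices/0000:86:00.0/net/cvl_0_0"
              "/sys/bus/pci/devices/0000:86:00.1/net/cvl_0_1")
# ${var##*/} removes the longest prefix matching '*/', i.e. the dirname.
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[@]}"   # prints: cvl_0_0 cvl_0_1
```

The real script populates the array from `"/sys/bus/pci/devices/$pci/net/"*`, so an absent driver simply leaves the glob unexpanded and the `(( 1 == 0 ))` count check catches it.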
00:24:39.937 [2024-11-20 15:33:43.675412] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:39.937 [2024-11-20 15:33:43.675453] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:39.937 [2024-11-20 15:33:43.675471] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:39.937 [2024-11-20 15:33:43.675484] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:39.937 [2024-11-20 15:33:43.675501] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:39.937 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.937 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:39.937 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:39.937 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.937 [2024-11-20 15:33:43.682347] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x24c79f0 was disconnected and freed. delete nvme_qpair. 
00:24:39.937 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:39.937 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.938 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:39.938 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:39.938 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:39.938 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.938 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:39.938 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:39.938 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:39.938 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:39.938 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:39.938 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.938 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:39.938 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.938 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:39.938 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:24:39.938 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:40.196 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.196 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:40.196 15:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:41.132 15:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:41.132 15:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.132 15:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:41.132 15:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.132 15:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.132 15:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.132 15:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:41.132 15:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.132 15:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:41.132 15:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:42.068 15:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:42.068 15:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
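[Editor's note] The repeating `get_bdev_list` / `sleep 1` blocks above are the test's `wait_for_bdev` polling loop: it keeps listing bdevs over the `/tmp/host.sock` RPC until the list matches the expected name. (In the xtrace output the comparison target appears as `\n\v\m\e\0\n\1` because bash escapes the unquoted right-hand side of `[[ != ]]` when echoing it, so it is matched literally rather than as a glob.) A self-contained pure-bash mock of the pattern; `get_bdev_list` is a stub here, whereas the real helper runs `rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs`:

```shell
#!/usr/bin/env bash
# Stub standing in for the RPC + jq pipeline in discovery_remove_ifc.sh@29.
get_bdev_list() { echo "$MOCK_BDEVS"; }

# Poll until the bdev list equals $1, bounded so the sketch always ends.
wait_for_bdev() {
    local want=$1 tries=0
    while [[ "$(get_bdev_list)" != "$want" ]] && (( tries++ < 5 )); do
        sleep 0.1   # the real loop sleeps 1s between RPC polls
    done
    [[ "$(get_bdev_list)" == "$want" ]]
}

MOCK_BDEVS=nvme0n1
wait_for_bdev nvme0n1 && echo "bdev present"   # prints: bdev present
MOCK_BDEVS=""
wait_for_bdev "" && echo "bdev removed"        # prints: bdev removed
```

The test drives this twice: once with `nvme0n1` after discovery attach, then with the empty string after downing the interface.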
00:24:42.068 15:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:42.068 15:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.068 15:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:42.068 15:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:42.068 15:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:42.068 15:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.327 15:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:42.327 15:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:43.263 15:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:43.263 15:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.263 15:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:43.263 15:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.263 15:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:43.263 15:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.263 15:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:43.263 15:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.263 15:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:43.263 15:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:44.199 15:33:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:44.199 15:33:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.199 15:33:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:44.199 15:33:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.199 15:33:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:44.199 15:33:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:44.199 15:33:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:44.199 15:33:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.199 15:33:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:44.199 15:33:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:45.575 15:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:45.575 15:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.575 15:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:45.575 15:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.575 15:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- 
# sort 00:24:45.575 15:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:45.575 15:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:45.575 15:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.575 [2024-11-20 15:33:49.117096] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:45.575 [2024-11-20 15:33:49.117133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.575 [2024-11-20 15:33:49.117161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.575 [2024-11-20 15:33:49.117171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.575 [2024-11-20 15:33:49.117178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.575 [2024-11-20 15:33:49.117186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.575 [2024-11-20 15:33:49.117193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.575 [2024-11-20 15:33:49.117200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.576 [2024-11-20 15:33:49.117207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.576 [2024-11-20 15:33:49.117214] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.576 [2024-11-20 15:33:49.117221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.576 [2024-11-20 15:33:49.117228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4220 is same with the state(6) to be set 00:24:45.576 [2024-11-20 15:33:49.127119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a4220 (9): Bad file descriptor 00:24:45.576 [2024-11-20 15:33:49.137154] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:45.576 [2024-11-20 15:33:49.137165] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:45.576 [2024-11-20 15:33:49.137169] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:45.576 [2024-11-20 15:33:49.137174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:45.576 [2024-11-20 15:33:49.137198] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
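[Editor's note] The disconnect/reconnect cycle above is bounded by the flags passed to `bdev_nvme_start_discovery` earlier in the log (`--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1`). As a rough simplification of SPDK's policy (not the exact state machine), the bdev layer retries about `ctrlr_loss_timeout / reconnect_delay` times before giving the controller up for lost:

```shell
#!/usr/bin/env bash
# Back-of-envelope retry budget for the flags used in this test run.
reconnect_delay=1      # --reconnect-delay-sec
ctrlr_loss_timeout=2   # --ctrlr-loss-timeout-sec
max_attempts=$(( ctrlr_loss_timeout / reconnect_delay ))
echo "max reconnect attempts ~ $max_attempts"   # prints: max reconnect attempts ~ 2
```

That budget is why only a couple of "Start reconnecting ctrlr" / "controller reinitialization failed" rounds appear before the entry is removed.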
00:24:45.576 15:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:45.576 15:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:46.509 15:33:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:46.509 15:33:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:46.509 15:33:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:46.509 15:33:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.509 15:33:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:46.509 15:33:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:46.509 15:33:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:46.509 [2024-11-20 15:33:50.192989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:46.509 [2024-11-20 15:33:50.193074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24a4220 with addr=10.0.0.2, port=4420 00:24:46.509 [2024-11-20 15:33:50.193108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4220 is same with the state(6) to be set 00:24:46.509 [2024-11-20 15:33:50.193165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a4220 (9): Bad file descriptor 00:24:46.509 [2024-11-20 15:33:50.194126] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:24:46.509 [2024-11-20 15:33:50.194188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:46.509 [2024-11-20 15:33:50.194213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:46.509 [2024-11-20 15:33:50.194236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:46.509 [2024-11-20 15:33:50.194257] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:46.509 [2024-11-20 15:33:50.194274] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:46.509 [2024-11-20 15:33:50.194287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:46.509 [2024-11-20 15:33:50.194309] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:46.509 [2024-11-20 15:33:50.194323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:46.509 15:33:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.509 15:33:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:46.509 15:33:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:47.445 [2024-11-20 15:33:51.196844] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:47.445 [2024-11-20 15:33:51.196864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
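[Editor's note] The failure path above reports two errno values: `connect() failed, errno = 110` and `(9): Bad file descriptor` on the flushed qpair. On Linux those are ETIMEDOUT and EBADF, consistent with the target interface having been addr-deleted and downed at steps @75/@76. A tiny pure-bash lookup covering just the errnos seen in this log:

```shell
#!/usr/bin/env bash
# Map the Linux errno numbers appearing in this log to their names.
errno_name() {
    case $1 in
        9)   echo EBADF ;;      # "Bad file descriptor" on the flushed tqpair
        110) echo ETIMEDOUT ;;  # spdk_sock_recv()/connect() "Connection timed out"
        *)   echo UNKNOWN ;;
    esac
}
errno_name 110   # prints: ETIMEDOUT
errno_name 9     # prints: EBADF
```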
00:24:47.445 [2024-11-20 15:33:51.196876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:47.445 [2024-11-20 15:33:51.196882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:47.445 [2024-11-20 15:33:51.196890] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:47.445 [2024-11-20 15:33:51.196917] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:47.445 [2024-11-20 15:33:51.196922] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:47.445 [2024-11-20 15:33:51.196927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:47.445 [2024-11-20 15:33:51.196954] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:47.445 [2024-11-20 15:33:51.196976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.445 [2024-11-20 15:33:51.196986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 15:33:51.196995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.445 [2024-11-20 15:33:51.197002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 15:33:51.197009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:47.445 [2024-11-20 15:33:51.197016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 15:33:51.197023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.445 [2024-11-20 15:33:51.197030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 15:33:51.197038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.445 [2024-11-20 15:33:51.197045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 15:33:51.197051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:24:47.445 [2024-11-20 15:33:51.197506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2493900 (9): Bad file descriptor 00:24:47.445 [2024-11-20 15:33:51.198517] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:47.445 [2024-11-20 15:33:51.198528] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:47.445 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:47.445 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.445 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:47.445 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:47.445 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:47.445 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.445 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:47.445 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.445 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:47.445 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.445 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.445 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:47.445 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:47.445 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.445 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:47.445 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.445 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:47.445 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.445 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:47.704 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:47.704 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:47.704 15:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:48.639 15:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:48.639 15:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.639 15:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:48.639 15:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.639 15:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:48.639 15:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:48.639 15:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:48.639 15:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.639 15:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:48.639 15:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:49.575 [2024-11-20 15:33:53.249412] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:49.575 [2024-11-20 15:33:53.249429] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:49.575 [2024-11-20 15:33:53.249441] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:49.575 [2024-11-20 15:33:53.375836] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:49.575 15:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:49.575 15:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:49.575 15:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:49.575 15:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.575 15:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:49.575 15:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.575 15:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:49.575 15:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.833 15:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:49.833 15:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:49.833 [2024-11-20 15:33:53.591981] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:49.833 [2024-11-20 15:33:53.592610] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x2498760:1 started. 
00:24:49.833 [2024-11-20 15:33:53.593665] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:49.833 [2024-11-20 15:33:53.593695] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:49.833 [2024-11-20 15:33:53.593712] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:49.833 [2024-11-20 15:33:53.593725] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:49.833 [2024-11-20 15:33:53.593732] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:49.833 [2024-11-20 15:33:53.598011] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x2498760 was disconnected and freed. delete nvme_qpair. 00:24:50.768 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:50.768 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.768 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:50.768 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.768 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:50.768 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:50.768 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:50.768 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.768 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:50.768 15:33:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:50.768 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2281390 00:24:50.768 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2281390 ']' 00:24:50.768 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2281390 00:24:50.768 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:50.768 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.768 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2281390 00:24:50.768 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:50.768 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:50.768 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2281390' 00:24:50.768 killing process with pid 2281390 00:24:50.768 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2281390 00:24:50.768 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2281390 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:51.027 
15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:51.027 rmmod nvme_tcp 00:24:51.027 rmmod nvme_fabrics 00:24:51.027 rmmod nvme_keyring 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2281366 ']' 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2281366 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2281366 ']' 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2281366 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2281366 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2281366' 00:24:51.027 
killing process with pid 2281366 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2281366 00:24:51.027 15:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2281366 00:24:51.286 15:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:51.286 15:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:51.286 15:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:51.286 15:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:51.286 15:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:51.286 15:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:51.286 15:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:51.286 15:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:51.286 15:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:51.286 15:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.286 15:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.286 15:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.191 15:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:53.450 00:24:53.450 real 0m21.448s 00:24:53.450 user 0m26.619s 00:24:53.450 sys 0m5.926s 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:53.450 ************************************ 00:24:53.450 END TEST nvmf_discovery_remove_ifc 00:24:53.450 ************************************ 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.450 ************************************ 00:24:53.450 START TEST nvmf_identify_kernel_target 00:24:53.450 ************************************ 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:53.450 * Looking for test storage... 
00:24:53.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:53.450 15:33:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:53.450 15:33:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:53.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.450 --rc genhtml_branch_coverage=1 00:24:53.450 --rc genhtml_function_coverage=1 00:24:53.450 --rc genhtml_legend=1 00:24:53.450 --rc geninfo_all_blocks=1 00:24:53.450 --rc geninfo_unexecuted_blocks=1 00:24:53.450 00:24:53.450 ' 00:24:53.450 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:53.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.450 --rc genhtml_branch_coverage=1 00:24:53.450 --rc genhtml_function_coverage=1 00:24:53.450 --rc genhtml_legend=1 00:24:53.451 --rc geninfo_all_blocks=1 00:24:53.451 --rc geninfo_unexecuted_blocks=1 00:24:53.451 00:24:53.451 ' 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:53.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.451 --rc genhtml_branch_coverage=1 00:24:53.451 --rc genhtml_function_coverage=1 00:24:53.451 --rc genhtml_legend=1 00:24:53.451 --rc geninfo_all_blocks=1 00:24:53.451 --rc geninfo_unexecuted_blocks=1 00:24:53.451 00:24:53.451 ' 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:53.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.451 --rc genhtml_branch_coverage=1 00:24:53.451 --rc genhtml_function_coverage=1 00:24:53.451 --rc genhtml_legend=1 00:24:53.451 --rc geninfo_all_blocks=1 00:24:53.451 --rc geninfo_unexecuted_blocks=1 00:24:53.451 00:24:53.451 ' 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.451 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:53.710 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.710 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.710 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.710 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.710 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:53.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:53.711 15:33:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.279 15:34:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:00.279 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.279 15:34:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:00.279 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.279 15:34:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:00.279 Found net devices under 0000:86:00.0: cvl_0_0 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:00.279 Found net devices under 0000:86:00.1: cvl_0_1 
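The per-port lookup above (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)`) is a plain sysfs glob. A small sketch of it, with the sysfs root parameterized — an assumption of this sketch only, so it can be exercised against a scratch directory; the real script hard-codes /sys:

```shell
#!/usr/bin/env bash
# Print the net interface names (e.g. cvl_0_0) bound to one PCI function,
# mirroring the pci_net_devs glob in the trace above.
list_pci_net_devs() {
    local pci=$1 root=${2:-/sys} d
    for d in "$root/bus/pci/devices/$pci/net/"*; do
        # An unmatched glob stays literal, so guard with -e.
        [ -e "$d" ] && printf '%s\n' "${d##*/}"
    done
}
```

This is what produces the "Found net devices under 0000:86:00.0: cvl_0_0" lines: one glob per PCI function, basename-stripped into `net_devs`.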
00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:00.279 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:00.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:00.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:25:00.279 00:25:00.279 --- 10.0.0.2 ping statistics --- 00:25:00.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.279 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:25:00.280 00:25:00.280 --- 10.0.0.1 ping statistics --- 00:25:00.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.280 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:00.280 
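The nvmf_tcp_init sequence above moves the first port (cvl_0_0) into a private namespace, addresses both ends on 10.0.0.0/24, opens TCP port 4420, and ping-checks the link in both directions. A dry-run sketch of that wiring, which prints the commands instead of executing them (they need root and the physical NICs from the trace):

```shell
#!/usr/bin/env bash
# Dry-run of the namespace wiring in nvmf_tcp_init: the target side gets
# 10.0.0.2 inside the netns, the initiator keeps 10.0.0.1 in the root ns.
wire_tcp_ns() {
    local tgt=$1 ini=$2 ns=${3:-${1}_ns_spdk}
    cat <<EOF
ip -4 addr flush $tgt
ip -4 addr flush $ini
ip netns add $ns
ip link set $tgt netns $ns
ip addr add 10.0.0.1/24 dev $ini
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt
ip link set $ini up
ip netns exec $ns ip link set $tgt up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini -p tcp --dport 4420 -j ACCEPT
EOF
}
```

The two successful pings in the trace (root ns to 10.0.0.2, netns to 10.0.0.1) are the sanity check that this wiring came up before `modprobe nvme-tcp` runs.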
15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:00.280 15:34:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:02.183 Waiting for block devices as requested 00:25:02.442 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:02.442 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:02.442 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:02.701 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:02.701 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:02.701 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:02.959 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:02.959 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:02.959 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:02.959 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:03.216 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:03.216 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:03.216 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:03.475 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:03.475 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:25:03.475 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:03.475 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:03.737 No valid GPT data, bailing 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
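Above, /dev/nvme0n1 is accepted as the backing device only after spdk-gpt.py bails ("No valid GPT data") and `blkid -s PTTYPE` returns nothing, so block_in_use returns 1. The decision reduces to a test on blkid's PTTYPE value; a hedged sketch (the wrapper name is mine, not the script's):

```shell
#!/usr/bin/env bash
# Hypothetical reduction of the block_in_use check: feed it the output of
# `blkid -s PTTYPE -o value /dev/$blk`. Non-empty means a partition table
# exists and the disk is considered in use; empty (pt=) means it is free
# to export, matching the "pt= ... return 1" sequence in the trace.
block_has_pt() {
    [ -n "$1" ]
}
```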
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:03.737 00:25:03.737 Discovery Log Number of Records 2, Generation counter 2 00:25:03.737 =====Discovery Log Entry 0====== 00:25:03.737 trtype: tcp 00:25:03.737 adrfam: ipv4 00:25:03.737 subtype: current discovery subsystem 
00:25:03.737 treq: not specified, sq flow control disable supported 00:25:03.737 portid: 1 00:25:03.737 trsvcid: 4420 00:25:03.737 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:03.737 traddr: 10.0.0.1 00:25:03.737 eflags: none 00:25:03.737 sectype: none 00:25:03.737 =====Discovery Log Entry 1====== 00:25:03.737 trtype: tcp 00:25:03.737 adrfam: ipv4 00:25:03.737 subtype: nvme subsystem 00:25:03.737 treq: not specified, sq flow control disable supported 00:25:03.737 portid: 1 00:25:03.737 trsvcid: 4420 00:25:03.737 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:03.737 traddr: 10.0.0.1 00:25:03.737 eflags: none 00:25:03.737 sectype: none 00:25:03.737 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:03.737 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:04.111 ===================================================== 00:25:04.111 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:04.111 ===================================================== 00:25:04.111 Controller Capabilities/Features 00:25:04.111 ================================ 00:25:04.111 Vendor ID: 0000 00:25:04.111 Subsystem Vendor ID: 0000 00:25:04.111 Serial Number: 3adab604ada881cb5e20 00:25:04.111 Model Number: Linux 00:25:04.111 Firmware Version: 6.8.9-20 00:25:04.111 Recommended Arb Burst: 0 00:25:04.111 IEEE OUI Identifier: 00 00 00 00:25:04.111 Multi-path I/O 00:25:04.111 May have multiple subsystem ports: No 00:25:04.111 May have multiple controllers: No 00:25:04.111 Associated with SR-IOV VF: No 00:25:04.111 Max Data Transfer Size: Unlimited 00:25:04.111 Max Number of Namespaces: 0 00:25:04.111 Max Number of I/O Queues: 1024 00:25:04.111 NVMe Specification Version (VS): 1.3 00:25:04.111 NVMe Specification Version (Identify): 1.3 00:25:04.111 Maximum Queue Entries: 1024 
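The configure_kernel_target phase traced above is a sequence of mkdirs, echoes and one symlink under /sys/kernel/config/nvmet. A condensed sketch follows; the attribute file names (attr_model, device_path, addr_traddr, ...) are my reading of the kernel's nvmet configfs layout, since the trace records only the values echoed, and the root is parameterized so the layout can be exercised on a plain directory:

```shell
#!/usr/bin/env bash
# Sketch of configure_kernel_target: export $dev as namespace 1 of $nqn on
# a TCP listener at $ip:4420. On a real kernel root=/sys/kernel/config/nvmet
# and ports/1/subsystems is created by configfs itself, not by mkdir.
setup_kernel_target() {
    local root=$1 nqn=$2 dev=$3 ip=$4
    local subsys=$root/subsystems/$nqn port=$root/ports/1
    mkdir -p "$subsys/namespaces/1" "$port/subsystems"
    echo "SPDK-$nqn" > "$subsys/attr_model"          # model string seen in the trace
    echo 1           > "$subsys/attr_allow_any_host"
    echo "$dev"      > "$subsys/namespaces/1/device_path"
    echo 1           > "$subsys/namespaces/1/enable"
    echo "$ip"       > "$port/addr_traddr"
    echo tcp         > "$port/addr_trtype"
    echo 4420        > "$port/addr_trsvcid"
    echo ipv4        > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"              # expose the subsystem on the port
}
```

The two-record discovery listing in the trace is exactly what this layout produces: the well-known discovery subsystem plus nqn.2016-06.io.spdk:testnqn, both on 10.0.0.1:4420.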
00:25:04.111 Contiguous Queues Required: No 00:25:04.111 Arbitration Mechanisms Supported 00:25:04.111 Weighted Round Robin: Not Supported 00:25:04.111 Vendor Specific: Not Supported 00:25:04.111 Reset Timeout: 7500 ms 00:25:04.111 Doorbell Stride: 4 bytes 00:25:04.111 NVM Subsystem Reset: Not Supported 00:25:04.111 Command Sets Supported 00:25:04.111 NVM Command Set: Supported 00:25:04.111 Boot Partition: Not Supported 00:25:04.111 Memory Page Size Minimum: 4096 bytes 00:25:04.111 Memory Page Size Maximum: 4096 bytes 00:25:04.111 Persistent Memory Region: Not Supported 00:25:04.111 Optional Asynchronous Events Supported 00:25:04.111 Namespace Attribute Notices: Not Supported 00:25:04.111 Firmware Activation Notices: Not Supported 00:25:04.111 ANA Change Notices: Not Supported 00:25:04.111 PLE Aggregate Log Change Notices: Not Supported 00:25:04.111 LBA Status Info Alert Notices: Not Supported 00:25:04.111 EGE Aggregate Log Change Notices: Not Supported 00:25:04.111 Normal NVM Subsystem Shutdown event: Not Supported 00:25:04.111 Zone Descriptor Change Notices: Not Supported 00:25:04.111 Discovery Log Change Notices: Supported 00:25:04.111 Controller Attributes 00:25:04.111 128-bit Host Identifier: Not Supported 00:25:04.111 Non-Operational Permissive Mode: Not Supported 00:25:04.111 NVM Sets: Not Supported 00:25:04.111 Read Recovery Levels: Not Supported 00:25:04.111 Endurance Groups: Not Supported 00:25:04.111 Predictable Latency Mode: Not Supported 00:25:04.111 Traffic Based Keep ALive: Not Supported 00:25:04.111 Namespace Granularity: Not Supported 00:25:04.111 SQ Associations: Not Supported 00:25:04.111 UUID List: Not Supported 00:25:04.111 Multi-Domain Subsystem: Not Supported 00:25:04.111 Fixed Capacity Management: Not Supported 00:25:04.111 Variable Capacity Management: Not Supported 00:25:04.111 Delete Endurance Group: Not Supported 00:25:04.111 Delete NVM Set: Not Supported 00:25:04.111 Extended LBA Formats Supported: Not Supported 00:25:04.111 Flexible 
Data Placement Supported: Not Supported 00:25:04.111 00:25:04.111 Controller Memory Buffer Support 00:25:04.111 ================================ 00:25:04.111 Supported: No 00:25:04.111 00:25:04.111 Persistent Memory Region Support 00:25:04.111 ================================ 00:25:04.111 Supported: No 00:25:04.111 00:25:04.111 Admin Command Set Attributes 00:25:04.111 ============================ 00:25:04.111 Security Send/Receive: Not Supported 00:25:04.111 Format NVM: Not Supported 00:25:04.111 Firmware Activate/Download: Not Supported 00:25:04.111 Namespace Management: Not Supported 00:25:04.111 Device Self-Test: Not Supported 00:25:04.111 Directives: Not Supported 00:25:04.111 NVMe-MI: Not Supported 00:25:04.111 Virtualization Management: Not Supported 00:25:04.111 Doorbell Buffer Config: Not Supported 00:25:04.111 Get LBA Status Capability: Not Supported 00:25:04.111 Command & Feature Lockdown Capability: Not Supported 00:25:04.112 Abort Command Limit: 1 00:25:04.112 Async Event Request Limit: 1 00:25:04.112 Number of Firmware Slots: N/A 00:25:04.112 Firmware Slot 1 Read-Only: N/A 00:25:04.112 Firmware Activation Without Reset: N/A 00:25:04.112 Multiple Update Detection Support: N/A 00:25:04.112 Firmware Update Granularity: No Information Provided 00:25:04.112 Per-Namespace SMART Log: No 00:25:04.112 Asymmetric Namespace Access Log Page: Not Supported 00:25:04.112 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:04.112 Command Effects Log Page: Not Supported 00:25:04.112 Get Log Page Extended Data: Supported 00:25:04.112 Telemetry Log Pages: Not Supported 00:25:04.112 Persistent Event Log Pages: Not Supported 00:25:04.112 Supported Log Pages Log Page: May Support 00:25:04.112 Commands Supported & Effects Log Page: Not Supported 00:25:04.112 Feature Identifiers & Effects Log Page:May Support 00:25:04.112 NVMe-MI Commands & Effects Log Page: May Support 00:25:04.112 Data Area 4 for Telemetry Log: Not Supported 00:25:04.112 Error Log Page Entries 
Supported: 1 00:25:04.112 Keep Alive: Not Supported 00:25:04.112 00:25:04.112 NVM Command Set Attributes 00:25:04.112 ========================== 00:25:04.112 Submission Queue Entry Size 00:25:04.112 Max: 1 00:25:04.112 Min: 1 00:25:04.112 Completion Queue Entry Size 00:25:04.112 Max: 1 00:25:04.112 Min: 1 00:25:04.112 Number of Namespaces: 0 00:25:04.112 Compare Command: Not Supported 00:25:04.112 Write Uncorrectable Command: Not Supported 00:25:04.112 Dataset Management Command: Not Supported 00:25:04.112 Write Zeroes Command: Not Supported 00:25:04.112 Set Features Save Field: Not Supported 00:25:04.112 Reservations: Not Supported 00:25:04.112 Timestamp: Not Supported 00:25:04.112 Copy: Not Supported 00:25:04.112 Volatile Write Cache: Not Present 00:25:04.112 Atomic Write Unit (Normal): 1 00:25:04.112 Atomic Write Unit (PFail): 1 00:25:04.112 Atomic Compare & Write Unit: 1 00:25:04.112 Fused Compare & Write: Not Supported 00:25:04.112 Scatter-Gather List 00:25:04.112 SGL Command Set: Supported 00:25:04.112 SGL Keyed: Not Supported 00:25:04.112 SGL Bit Bucket Descriptor: Not Supported 00:25:04.112 SGL Metadata Pointer: Not Supported 00:25:04.112 Oversized SGL: Not Supported 00:25:04.112 SGL Metadata Address: Not Supported 00:25:04.112 SGL Offset: Supported 00:25:04.112 Transport SGL Data Block: Not Supported 00:25:04.112 Replay Protected Memory Block: Not Supported 00:25:04.112 00:25:04.112 Firmware Slot Information 00:25:04.112 ========================= 00:25:04.112 Active slot: 0 00:25:04.112 00:25:04.112 00:25:04.112 Error Log 00:25:04.112 ========= 00:25:04.112 00:25:04.112 Active Namespaces 00:25:04.112 ================= 00:25:04.112 Discovery Log Page 00:25:04.112 ================== 00:25:04.112 Generation Counter: 2 00:25:04.112 Number of Records: 2 00:25:04.112 Record Format: 0 00:25:04.112 00:25:04.112 Discovery Log Entry 0 00:25:04.112 ---------------------- 00:25:04.112 Transport Type: 3 (TCP) 00:25:04.112 Address Family: 1 (IPv4) 00:25:04.112 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:25:04.112 Entry Flags: 00:25:04.112 Duplicate Returned Information: 0 00:25:04.112 Explicit Persistent Connection Support for Discovery: 0 00:25:04.112 Transport Requirements: 00:25:04.112 Secure Channel: Not Specified 00:25:04.112 Port ID: 1 (0x0001) 00:25:04.112 Controller ID: 65535 (0xffff) 00:25:04.112 Admin Max SQ Size: 32 00:25:04.112 Transport Service Identifier: 4420 00:25:04.112 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:04.112 Transport Address: 10.0.0.1 00:25:04.112 Discovery Log Entry 1 00:25:04.112 ---------------------- 00:25:04.112 Transport Type: 3 (TCP) 00:25:04.112 Address Family: 1 (IPv4) 00:25:04.112 Subsystem Type: 2 (NVM Subsystem) 00:25:04.112 Entry Flags: 00:25:04.112 Duplicate Returned Information: 0 00:25:04.112 Explicit Persistent Connection Support for Discovery: 0 00:25:04.112 Transport Requirements: 00:25:04.112 Secure Channel: Not Specified 00:25:04.112 Port ID: 1 (0x0001) 00:25:04.112 Controller ID: 65535 (0xffff) 00:25:04.112 Admin Max SQ Size: 32 00:25:04.112 Transport Service Identifier: 4420 00:25:04.112 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:04.112 Transport Address: 10.0.0.1 00:25:04.112 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:04.112 get_feature(0x01) failed 00:25:04.112 get_feature(0x02) failed 00:25:04.112 get_feature(0x04) failed 00:25:04.112 ===================================================== 00:25:04.112 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:04.112 ===================================================== 00:25:04.112 Controller Capabilities/Features 00:25:04.112 ================================ 00:25:04.112 Vendor ID: 0000 00:25:04.112 Subsystem Vendor ID: 
0000 00:25:04.112 Serial Number: cce661518116239351ba 00:25:04.112 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:04.112 Firmware Version: 6.8.9-20 00:25:04.112 Recommended Arb Burst: 6 00:25:04.112 IEEE OUI Identifier: 00 00 00 00:25:04.112 Multi-path I/O 00:25:04.112 May have multiple subsystem ports: Yes 00:25:04.112 May have multiple controllers: Yes 00:25:04.112 Associated with SR-IOV VF: No 00:25:04.112 Max Data Transfer Size: Unlimited 00:25:04.112 Max Number of Namespaces: 1024 00:25:04.112 Max Number of I/O Queues: 128 00:25:04.112 NVMe Specification Version (VS): 1.3 00:25:04.112 NVMe Specification Version (Identify): 1.3 00:25:04.112 Maximum Queue Entries: 1024 00:25:04.112 Contiguous Queues Required: No 00:25:04.112 Arbitration Mechanisms Supported 00:25:04.112 Weighted Round Robin: Not Supported 00:25:04.112 Vendor Specific: Not Supported 00:25:04.112 Reset Timeout: 7500 ms 00:25:04.112 Doorbell Stride: 4 bytes 00:25:04.112 NVM Subsystem Reset: Not Supported 00:25:04.112 Command Sets Supported 00:25:04.112 NVM Command Set: Supported 00:25:04.112 Boot Partition: Not Supported 00:25:04.112 Memory Page Size Minimum: 4096 bytes 00:25:04.112 Memory Page Size Maximum: 4096 bytes 00:25:04.112 Persistent Memory Region: Not Supported 00:25:04.112 Optional Asynchronous Events Supported 00:25:04.112 Namespace Attribute Notices: Supported 00:25:04.112 Firmware Activation Notices: Not Supported 00:25:04.112 ANA Change Notices: Supported 00:25:04.112 PLE Aggregate Log Change Notices: Not Supported 00:25:04.112 LBA Status Info Alert Notices: Not Supported 00:25:04.112 EGE Aggregate Log Change Notices: Not Supported 00:25:04.112 Normal NVM Subsystem Shutdown event: Not Supported 00:25:04.112 Zone Descriptor Change Notices: Not Supported 00:25:04.112 Discovery Log Change Notices: Not Supported 00:25:04.112 Controller Attributes 00:25:04.112 128-bit Host Identifier: Supported 00:25:04.112 Non-Operational Permissive Mode: Not Supported 00:25:04.112 NVM Sets: Not 
Supported 00:25:04.112 Read Recovery Levels: Not Supported 00:25:04.112 Endurance Groups: Not Supported 00:25:04.112 Predictable Latency Mode: Not Supported 00:25:04.112 Traffic Based Keep ALive: Supported 00:25:04.113 Namespace Granularity: Not Supported 00:25:04.113 SQ Associations: Not Supported 00:25:04.113 UUID List: Not Supported 00:25:04.113 Multi-Domain Subsystem: Not Supported 00:25:04.113 Fixed Capacity Management: Not Supported 00:25:04.113 Variable Capacity Management: Not Supported 00:25:04.113 Delete Endurance Group: Not Supported 00:25:04.113 Delete NVM Set: Not Supported 00:25:04.113 Extended LBA Formats Supported: Not Supported 00:25:04.113 Flexible Data Placement Supported: Not Supported 00:25:04.113 00:25:04.113 Controller Memory Buffer Support 00:25:04.113 ================================ 00:25:04.113 Supported: No 00:25:04.113 00:25:04.113 Persistent Memory Region Support 00:25:04.113 ================================ 00:25:04.113 Supported: No 00:25:04.113 00:25:04.113 Admin Command Set Attributes 00:25:04.113 ============================ 00:25:04.113 Security Send/Receive: Not Supported 00:25:04.113 Format NVM: Not Supported 00:25:04.113 Firmware Activate/Download: Not Supported 00:25:04.113 Namespace Management: Not Supported 00:25:04.113 Device Self-Test: Not Supported 00:25:04.113 Directives: Not Supported 00:25:04.113 NVMe-MI: Not Supported 00:25:04.113 Virtualization Management: Not Supported 00:25:04.113 Doorbell Buffer Config: Not Supported 00:25:04.113 Get LBA Status Capability: Not Supported 00:25:04.113 Command & Feature Lockdown Capability: Not Supported 00:25:04.113 Abort Command Limit: 4 00:25:04.113 Async Event Request Limit: 4 00:25:04.113 Number of Firmware Slots: N/A 00:25:04.113 Firmware Slot 1 Read-Only: N/A 00:25:04.113 Firmware Activation Without Reset: N/A 00:25:04.113 Multiple Update Detection Support: N/A 00:25:04.113 Firmware Update Granularity: No Information Provided 00:25:04.113 Per-Namespace SMART Log: Yes 
00:25:04.113 Asymmetric Namespace Access Log Page: Supported 00:25:04.113 ANA Transition Time : 10 sec 00:25:04.113 00:25:04.113 Asymmetric Namespace Access Capabilities 00:25:04.113 ANA Optimized State : Supported 00:25:04.113 ANA Non-Optimized State : Supported 00:25:04.113 ANA Inaccessible State : Supported 00:25:04.113 ANA Persistent Loss State : Supported 00:25:04.113 ANA Change State : Supported 00:25:04.113 ANAGRPID is not changed : No 00:25:04.113 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:04.113 00:25:04.113 ANA Group Identifier Maximum : 128 00:25:04.113 Number of ANA Group Identifiers : 128 00:25:04.113 Max Number of Allowed Namespaces : 1024 00:25:04.113 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:04.113 Command Effects Log Page: Supported 00:25:04.113 Get Log Page Extended Data: Supported 00:25:04.113 Telemetry Log Pages: Not Supported 00:25:04.113 Persistent Event Log Pages: Not Supported 00:25:04.113 Supported Log Pages Log Page: May Support 00:25:04.113 Commands Supported & Effects Log Page: Not Supported 00:25:04.113 Feature Identifiers & Effects Log Page:May Support 00:25:04.113 NVMe-MI Commands & Effects Log Page: May Support 00:25:04.113 Data Area 4 for Telemetry Log: Not Supported 00:25:04.113 Error Log Page Entries Supported: 128 00:25:04.113 Keep Alive: Supported 00:25:04.113 Keep Alive Granularity: 1000 ms 00:25:04.113 00:25:04.113 NVM Command Set Attributes 00:25:04.113 ========================== 00:25:04.113 Submission Queue Entry Size 00:25:04.113 Max: 64 00:25:04.113 Min: 64 00:25:04.113 Completion Queue Entry Size 00:25:04.113 Max: 16 00:25:04.113 Min: 16 00:25:04.113 Number of Namespaces: 1024 00:25:04.113 Compare Command: Not Supported 00:25:04.113 Write Uncorrectable Command: Not Supported 00:25:04.113 Dataset Management Command: Supported 00:25:04.113 Write Zeroes Command: Supported 00:25:04.113 Set Features Save Field: Not Supported 00:25:04.113 Reservations: Not Supported 00:25:04.113 Timestamp: Not Supported 
00:25:04.113 Copy: Not Supported 00:25:04.113 Volatile Write Cache: Present 00:25:04.113 Atomic Write Unit (Normal): 1 00:25:04.113 Atomic Write Unit (PFail): 1 00:25:04.113 Atomic Compare & Write Unit: 1 00:25:04.113 Fused Compare & Write: Not Supported 00:25:04.113 Scatter-Gather List 00:25:04.113 SGL Command Set: Supported 00:25:04.113 SGL Keyed: Not Supported 00:25:04.113 SGL Bit Bucket Descriptor: Not Supported 00:25:04.113 SGL Metadata Pointer: Not Supported 00:25:04.113 Oversized SGL: Not Supported 00:25:04.113 SGL Metadata Address: Not Supported 00:25:04.113 SGL Offset: Supported 00:25:04.113 Transport SGL Data Block: Not Supported 00:25:04.113 Replay Protected Memory Block: Not Supported 00:25:04.113 00:25:04.113 Firmware Slot Information 00:25:04.113 ========================= 00:25:04.113 Active slot: 0 00:25:04.113 00:25:04.113 Asymmetric Namespace Access 00:25:04.113 =========================== 00:25:04.113 Change Count : 0 00:25:04.113 Number of ANA Group Descriptors : 1 00:25:04.113 ANA Group Descriptor : 0 00:25:04.113 ANA Group ID : 1 00:25:04.113 Number of NSID Values : 1 00:25:04.113 Change Count : 0 00:25:04.113 ANA State : 1 00:25:04.113 Namespace Identifier : 1 00:25:04.113 00:25:04.113 Commands Supported and Effects 00:25:04.113 ============================== 00:25:04.113 Admin Commands 00:25:04.113 -------------- 00:25:04.113 Get Log Page (02h): Supported 00:25:04.113 Identify (06h): Supported 00:25:04.113 Abort (08h): Supported 00:25:04.113 Set Features (09h): Supported 00:25:04.113 Get Features (0Ah): Supported 00:25:04.113 Asynchronous Event Request (0Ch): Supported 00:25:04.113 Keep Alive (18h): Supported 00:25:04.113 I/O Commands 00:25:04.113 ------------ 00:25:04.113 Flush (00h): Supported 00:25:04.113 Write (01h): Supported LBA-Change 00:25:04.113 Read (02h): Supported 00:25:04.113 Write Zeroes (08h): Supported LBA-Change 00:25:04.113 Dataset Management (09h): Supported 00:25:04.113 00:25:04.113 Error Log 00:25:04.113 ========= 
00:25:04.113 Entry: 0 00:25:04.113 Error Count: 0x3 00:25:04.113 Submission Queue Id: 0x0 00:25:04.113 Command Id: 0x5 00:25:04.113 Phase Bit: 0 00:25:04.113 Status Code: 0x2 00:25:04.113 Status Code Type: 0x0 00:25:04.113 Do Not Retry: 1 00:25:04.113 Error Location: 0x28 00:25:04.113 LBA: 0x0 00:25:04.113 Namespace: 0x0 00:25:04.113 Vendor Log Page: 0x0 00:25:04.113 ----------- 00:25:04.113 Entry: 1 00:25:04.113 Error Count: 0x2 00:25:04.113 Submission Queue Id: 0x0 00:25:04.113 Command Id: 0x5 00:25:04.113 Phase Bit: 0 00:25:04.113 Status Code: 0x2 00:25:04.113 Status Code Type: 0x0 00:25:04.113 Do Not Retry: 1 00:25:04.113 Error Location: 0x28 00:25:04.113 LBA: 0x0 00:25:04.113 Namespace: 0x0 00:25:04.113 Vendor Log Page: 0x0 00:25:04.113 ----------- 00:25:04.113 Entry: 2 00:25:04.113 Error Count: 0x1 00:25:04.113 Submission Queue Id: 0x0 00:25:04.113 Command Id: 0x4 00:25:04.113 Phase Bit: 0 00:25:04.113 Status Code: 0x2 00:25:04.113 Status Code Type: 0x0 00:25:04.113 Do Not Retry: 1 00:25:04.113 Error Location: 0x28 00:25:04.113 LBA: 0x0 00:25:04.113 Namespace: 0x0 00:25:04.114 Vendor Log Page: 0x0 00:25:04.114 00:25:04.114 Number of Queues 00:25:04.114 ================ 00:25:04.114 Number of I/O Submission Queues: 128 00:25:04.114 Number of I/O Completion Queues: 128 00:25:04.114 00:25:04.114 ZNS Specific Controller Data 00:25:04.114 ============================ 00:25:04.114 Zone Append Size Limit: 0 00:25:04.114 00:25:04.114 00:25:04.114 Active Namespaces 00:25:04.114 ================= 00:25:04.114 get_feature(0x05) failed 00:25:04.114 Namespace ID:1 00:25:04.114 Command Set Identifier: NVM (00h) 00:25:04.114 Deallocate: Supported 00:25:04.114 Deallocated/Unwritten Error: Not Supported 00:25:04.114 Deallocated Read Value: Unknown 00:25:04.114 Deallocate in Write Zeroes: Not Supported 00:25:04.114 Deallocated Guard Field: 0xFFFF 00:25:04.114 Flush: Supported 00:25:04.114 Reservation: Not Supported 00:25:04.114 Namespace Sharing Capabilities: Multiple 
Controllers 00:25:04.114 Size (in LBAs): 1953525168 (931GiB) 00:25:04.114 Capacity (in LBAs): 1953525168 (931GiB) 00:25:04.114 Utilization (in LBAs): 1953525168 (931GiB) 00:25:04.114 UUID: 416685f1-96f9-4f65-993b-d5916026e107 00:25:04.114 Thin Provisioning: Not Supported 00:25:04.114 Per-NS Atomic Units: Yes 00:25:04.114 Atomic Boundary Size (Normal): 0 00:25:04.114 Atomic Boundary Size (PFail): 0 00:25:04.114 Atomic Boundary Offset: 0 00:25:04.114 NGUID/EUI64 Never Reused: No 00:25:04.114 ANA group ID: 1 00:25:04.114 Namespace Write Protected: No 00:25:04.114 Number of LBA Formats: 1 00:25:04.114 Current LBA Format: LBA Format #00 00:25:04.114 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:04.114 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:04.114 rmmod nvme_tcp 00:25:04.114 rmmod nvme_fabrics 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:04.114 15:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.029 15:34:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:06.288 15:34:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:06.288 15:34:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:06.288 15:34:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:06.288 15:34:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:06.288 15:34:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:06.288 15:34:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:06.288 15:34:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:06.288 15:34:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:06.288 15:34:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:06.288 15:34:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:09.576 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:09.576 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:09.576 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:09.576 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:09.576 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:09.576 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:09.576 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:09.576 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:09.576 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:09.576 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:09.576 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:09.576 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:09.576 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:09.576 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:09.576 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:09.576 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:25:09.835 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:10.094 00:25:10.094 real 0m16.697s 00:25:10.094 user 0m4.396s 00:25:10.094 sys 0m8.725s 00:25:10.094 15:34:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:10.094 15:34:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:10.094 ************************************ 00:25:10.094 END TEST nvmf_identify_kernel_target 00:25:10.094 ************************************ 00:25:10.094 15:34:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:10.094 15:34:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:10.094 15:34:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:10.094 15:34:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.094 ************************************ 00:25:10.094 START TEST nvmf_auth_host 00:25:10.094 ************************************ 00:25:10.094 15:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:10.354 * Looking for test storage... 
00:25:10.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:10.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.354 --rc genhtml_branch_coverage=1 00:25:10.354 --rc genhtml_function_coverage=1 00:25:10.354 --rc genhtml_legend=1 00:25:10.354 --rc geninfo_all_blocks=1 00:25:10.354 --rc geninfo_unexecuted_blocks=1 00:25:10.354 00:25:10.354 ' 00:25:10.354 15:34:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:10.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.354 --rc genhtml_branch_coverage=1 00:25:10.354 --rc genhtml_function_coverage=1 00:25:10.354 --rc genhtml_legend=1 00:25:10.354 --rc geninfo_all_blocks=1 00:25:10.354 --rc geninfo_unexecuted_blocks=1 00:25:10.354 00:25:10.354 ' 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:10.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.354 --rc genhtml_branch_coverage=1 00:25:10.354 --rc genhtml_function_coverage=1 00:25:10.354 --rc genhtml_legend=1 00:25:10.354 --rc geninfo_all_blocks=1 00:25:10.354 --rc geninfo_unexecuted_blocks=1 00:25:10.354 00:25:10.354 ' 00:25:10.354 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:10.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.354 --rc genhtml_branch_coverage=1 00:25:10.354 --rc genhtml_function_coverage=1 00:25:10.354 --rc genhtml_legend=1 00:25:10.354 --rc geninfo_all_blocks=1 00:25:10.354 --rc geninfo_unexecuted_blocks=1 00:25:10.354 00:25:10.354 ' 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.355 15:34:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:10.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:10.355 15:34:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:10.355 15:34:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.933 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:16.934 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:16.934 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:16.934 Found net devices under 0000:86:00.0: cvl_0_0 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:16.934 Found net devices under 0000:86:00.1: cvl_0_1 00:25:16.934 15:34:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:16.934 15:34:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:16.934 15:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:16.934 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:16.934 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:16.934 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:16.934 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:16.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:25:16.934 00:25:16.934 --- 10.0.0.2 ping statistics --- 00:25:16.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.934 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:25:16.934 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:16.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:16.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:25:16.934 00:25:16.934 --- 10.0.0.1 ping statistics --- 00:25:16.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.935 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2293517 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2293517 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2293517 ']' 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:16.935 15:34:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9a98b828bee24659f729503ead3960a9 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.LZR 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9a98b828bee24659f729503ead3960a9 0 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9a98b828bee24659f729503ead3960a9 0 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9a98b828bee24659f729503ead3960a9 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.LZR 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.LZR 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.LZR 
00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a8c95f2294ae720513510f9998501e4738cb3c78cb3f5fd3abf6d0023afaae0f 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.vFP 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a8c95f2294ae720513510f9998501e4738cb3c78cb3f5fd3abf6d0023afaae0f 3 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a8c95f2294ae720513510f9998501e4738cb3c78cb3f5fd3abf6d0023afaae0f 3 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a8c95f2294ae720513510f9998501e4738cb3c78cb3f5fd3abf6d0023afaae0f 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.vFP 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.vFP 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.vFP 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=136336293ee702b2a21feaffdd96801f68a66833fa944ec2 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.jnW 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 136336293ee702b2a21feaffdd96801f68a66833fa944ec2 0 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 136336293ee702b2a21feaffdd96801f68a66833fa944ec2 0 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=136336293ee702b2a21feaffdd96801f68a66833fa944ec2 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.jnW 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.jnW 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.jnW 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=811972357e7f12197f9b5aa99f257e69d2708341336e6549 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xWg 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 811972357e7f12197f9b5aa99f257e69d2708341336e6549 2 00:25:16.935 15:34:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 811972357e7f12197f9b5aa99f257e69d2708341336e6549 2 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=811972357e7f12197f9b5aa99f257e69d2708341336e6549 00:25:16.935 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xWg 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xWg 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.xWg 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d98cb1f116d6887a06822be147432cae 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.jb9 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d98cb1f116d6887a06822be147432cae 1 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d98cb1f116d6887a06822be147432cae 1 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d98cb1f116d6887a06822be147432cae 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.jb9 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.jb9 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.jb9 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5adbcfccf217f6698af22dba8748b892 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.eeK 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5adbcfccf217f6698af22dba8748b892 1 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5adbcfccf217f6698af22dba8748b892 1 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5adbcfccf217f6698af22dba8748b892 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.eeK 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.eeK 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.eeK 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:16.936 15:34:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f516a96197ad05420e9c9ba1c1b981c22c8afe2c7388fdc7 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.3tu 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f516a96197ad05420e9c9ba1c1b981c22c8afe2c7388fdc7 2 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f516a96197ad05420e9c9ba1c1b981c22c8afe2c7388fdc7 2 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f516a96197ad05420e9c9ba1c1b981c22c8afe2c7388fdc7 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:16.936 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.3tu 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.3tu 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.3tu 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e5dcae12e215bbbed9c97374aeecb07d 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.upo 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e5dcae12e215bbbed9c97374aeecb07d 0 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e5dcae12e215bbbed9c97374aeecb07d 0 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e5dcae12e215bbbed9c97374aeecb07d 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.upo 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.upo 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.upo 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7dd7d486883749e9cfa69ea7619695ef318ca31a598e6df8de6ae96d4ddaa97c 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.AeR 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7dd7d486883749e9cfa69ea7619695ef318ca31a598e6df8de6ae96d4ddaa97c 3 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7dd7d486883749e9cfa69ea7619695ef318ca31a598e6df8de6ae96d4ddaa97c 3 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7dd7d486883749e9cfa69ea7619695ef318ca31a598e6df8de6ae96d4ddaa97c 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:17.195 15:34:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.AeR 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.AeR 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.AeR 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2293517 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2293517 ']' 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:17.195 15:34:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.LZR 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.vFP ]] 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vFP 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.jnW 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.xWg ]] 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xWg 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.jb9 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.eeK ]] 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.eeK 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.3tu 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.upo ]] 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.upo 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.AeR 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.455 15:34:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:17.455 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:17.456 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:17.456 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:17.456 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:17.456 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:17.456 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:17.456 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:17.456 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:17.456 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:17.456 15:34:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:20.053 Waiting for block devices as requested 00:25:20.053 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:20.312 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:20.312 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:20.571 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:20.571 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:20.571 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:20.571 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:20.829 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:20.829 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:20.829 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:20.829 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:21.087 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:21.088 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:21.088 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:21.346 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:21.346 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:21.346 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:21.914 No valid GPT data, bailing 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:21.914 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:22.174 00:25:22.174 Discovery Log Number of Records 2, Generation counter 2 00:25:22.174 =====Discovery Log Entry 0====== 00:25:22.174 trtype: tcp 00:25:22.174 adrfam: ipv4 00:25:22.174 subtype: current discovery subsystem 00:25:22.174 treq: not specified, sq flow control disable supported 00:25:22.174 portid: 1 00:25:22.174 trsvcid: 4420 00:25:22.174 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:22.174 traddr: 10.0.0.1 00:25:22.174 eflags: none 00:25:22.174 sectype: none 00:25:22.174 =====Discovery Log Entry 1====== 00:25:22.174 trtype: tcp 00:25:22.174 adrfam: ipv4 00:25:22.174 subtype: nvme subsystem 00:25:22.174 treq: not specified, sq flow control disable supported 00:25:22.174 portid: 1 00:25:22.174 trsvcid: 4420 00:25:22.174 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:22.174 traddr: 10.0.0.1 00:25:22.174 eflags: none 00:25:22.174 sectype: none 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: ]] 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.174 15:34:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.174 nvme0n1 00:25:22.174 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.174 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.174 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.174 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.174 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.174 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: ]] 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.434 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.435 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:22.435 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.435 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.435 nvme0n1 00:25:22.435 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.435 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.435 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.435 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.435 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.435 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.435 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.435 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.435 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.435 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.694 15:34:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: ]] 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.694 
15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.694 nvme0n1 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.694 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: ]] 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.695 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:25:22.954 nvme0n1 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: ]] 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.954 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.955 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.955 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.955 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.955 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.955 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.955 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.955 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.955 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.955 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.955 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.955 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.955 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.214 nvme0n1 00:25:23.214 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.214 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.214 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:25:23.214 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.214 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.214 15:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:23.214 15:34:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.214 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.473 nvme0n1 00:25:23.473 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.473 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.473 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.473 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.473 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.473 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.473 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.473 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.473 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.473 
15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.473 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: ]] 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:23.474 
15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.474 15:34:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.474 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.732 nvme0n1 00:25:23.732 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.732 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.732 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.732 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.732 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.732 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.732 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.732 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.732 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.732 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.732 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.732 15:34:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.732 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:23.732 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.732 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.732 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.732 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:23.732 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:23.732 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:23.732 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: ]] 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.733 15:34:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.733 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.992 nvme0n1 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.992 15:34:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: ]] 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.992 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.250 nvme0n1 00:25:24.250 15:34:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:24.250 15:34:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: ]] 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:24.250 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.251 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:24.251 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.251 15:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.251 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.251 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:25:24.251 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.251 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.251 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.251 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.251 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.251 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.251 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.251 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.251 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.251 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.251 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:24.251 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.251 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.510 nvme0n1 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.510 15:34:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.510 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.769 nvme0n1 00:25:24.769 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.769 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.769 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.769 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.769 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.769 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.769 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.769 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.769 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.769 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:24.769 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.769 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.769 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.769 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:24.769 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.769 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: ]] 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.770 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.029 nvme0n1 00:25:25.029 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.029 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.029 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.029 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.029 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.029 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.029 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.029 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.029 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.029 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.029 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.029 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:25:25.029 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:25.029 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.029 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.029 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:25.029 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:25.029 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:25.029 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:25.029 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: ]] 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.030 
15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.030 15:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.289 nvme0n1 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.289 15:34:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: ]] 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.289 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.548 nvme0n1 00:25:25.548 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.548 15:34:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.548 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.548 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.548 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.548 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:25.807 
15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: ]] 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.807 15:34:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.807 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.067 nvme0n1 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.067 15:34:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.067 
15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.067 15:34:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.327 nvme0n1 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: ]] 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.327 15:34:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.327 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.894 nvme0n1 00:25:26.894 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.894 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.894 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.894 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.894 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.894 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.894 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.894 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.894 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.894 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.894 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.894 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.894 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:26.894 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.894 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: ]] 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:26.895 15:34:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.895 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.153 nvme0n1 00:25:27.153 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.153 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.153 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.153 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.153 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.153 15:34:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: ]] 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.153 15:34:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.153 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.720 nvme0n1 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.720 15:34:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.720 15:34:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: ]] 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:27.720 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.721 15:34:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.721 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.979 nvme0n1 00:25:27.979 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.979 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.979 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.979 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.238 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.238 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.238 15:34:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.238 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.238 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.238 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.238 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.238 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.238 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:28.238 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.238 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.238 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.239 15:34:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.239 15:34:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.497 nvme0n1 00:25:28.497 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.497 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.497 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: ]] 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.498 15:34:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.498 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.757 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:28.757 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.757 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.325 nvme0n1 00:25:29.325 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.325 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.325 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.325 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.325 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.325 15:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.325 15:34:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: ]] 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.325 15:34:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.325 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.326 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.326 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.326 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.326 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.326 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:29.326 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.326 15:34:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.893 nvme0n1 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: ]] 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.893 15:34:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.893 15:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.461 nvme0n1 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: ]] 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.461 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.719 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.719 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.719 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.719 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.719 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.719 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.719 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.719 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.719 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.719 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.719 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.720 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.720 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:30.720 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.720 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.287 nvme0n1 00:25:31.287 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.287 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.287 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.287 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.287 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.287 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.287 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.287 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.287 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.287 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.287 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.287 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.287 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:31.287 15:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.287 
15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.287 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.880 nvme0n1 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:31.880 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: ]] 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.881 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.140 nvme0n1 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.140 
15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:32.140 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: ]] 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.141 15:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.141 nvme0n1 
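The `DHHC-1:...` secrets passed to `nvmet_auth_set_key` above follow the nvme-cli DH-HMAC-CHAP secret representation, `DHHC-1:<hash-id>:<base64 payload>:`, where (assuming the nvme-cli layout) the base64 payload is the raw key bytes with a 4-byte CRC-32 field appended. A minimal sketch that parses the keyid=0 secret taken from the sha384/ffdhe2048 pass in this log; the helper name `parse_dhchap_secret` is illustrative, not part of SPDK:

```python
import base64

def parse_dhchap_secret(secret: str):
    """Split a DH-HMAC-CHAP secret of the form DHHC-1:<hash-id>:<base64>:
    into its hash-id field, key bytes, and trailing 4-byte CRC field.
    Assumes the nvme-cli representation: payload = key || CRC-32."""
    prefix, hash_id, b64 = secret.rstrip(":").split(":")
    assert prefix == "DHHC-1"
    blob = base64.b64decode(b64)
    return hash_id, blob[:-4], blob[-4:]  # (hash id, key bytes, CRC bytes)

# keyid=0 secret copied verbatim from the log above
hash_id, key, crc = parse_dhchap_secret(
    "DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr:"
)
print(hash_id, len(key), len(crc))  # hash id "00", 32-byte key, 4-byte CRC
```

This is why the same secret string can be echoed into the kernel nvmet configfs key and passed as `--dhchap-key` to `bdev_nvme_attach_controller`: both sides parse the identical self-describing format.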
00:25:32.141 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.141 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.141 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.141 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.141 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:32.399 15:34:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: ]] 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.399 
15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.399 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.400 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.400 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.400 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.400 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:32.400 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.400 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.400 nvme0n1 00:25:32.400 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.400 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.400 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.400 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.400 15:34:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.400 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.400 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.400 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.658 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.658 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.658 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: ]] 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.659 nvme0n1 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.659 15:34:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.659 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.917 nvme0n1 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: ]] 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.917 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.176 nvme0n1 00:25:33.176 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.176 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.176 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.176 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.176 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.176 15:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.176 
15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: ]] 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.176 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.435 nvme0n1 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 
00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: ]] 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.435 15:34:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.435 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.436 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.436 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.436 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.436 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.436 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.694 nvme0n1 00:25:33.694 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.694 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.694 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.694 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.694 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.694 15:34:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.694 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.694 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.694 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.694 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.694 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.694 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.694 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:33.694 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.694 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: ]] 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.695 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.953 nvme0n1 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:33.953 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.954 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.213 nvme0n1 00:25:34.213 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.213 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.213 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.213 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.213 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.213 15:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.213 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.213 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.213 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.213 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.213 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.213 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.213 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.213 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:34.213 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.213 15:34:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: ]] 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.214 15:34:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.214 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.214 15:34:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.473 nvme0n1 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: ]] 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:34.473 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.474 
15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.474 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.732 nvme0n1 00:25:34.732 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.732 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.732 15:34:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.732 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.732 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.733 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.991 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.991 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.991 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.991 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.991 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.991 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.991 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:34.991 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.991 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.991 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.992 15:34:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: ]] 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.992 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.251 nvme0n1 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: ]] 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.251 15:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.251 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.251 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.251 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.251 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.251 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.251 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.251 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.251 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.251 15:34:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.251 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.251 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.251 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.251 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:35.251 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.251 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.509 nvme0n1 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.509 15:34:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:35.509 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:35.510 15:34:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:35.510 
15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.510 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.769 nvme0n1 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.769 15:34:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: ]] 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.769 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.028 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.028 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.028 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.028 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.028 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.028 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.028 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.028 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.028 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.028 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.029 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.029 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.029 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.029 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.029 15:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.287 nvme0n1 
00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:36.288 15:34:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: ]] 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.288 
15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.288 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.856 nvme0n1 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.856 15:34:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.856 15:34:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: ]] 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.856 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.114 nvme0n1 00:25:37.114 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.114 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.114 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.114 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.114 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.114 15:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.114 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.114 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.114 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.114 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: ]] 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:37.373 15:34:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.373 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.373 15:34:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.374 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.374 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.374 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:37.374 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.374 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.633 nvme0n1 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.633 15:34:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:37.633 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.202 nvme0n1 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.202 15:34:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: ]] 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.202 15:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.770 nvme0n1 00:25:38.770 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:38.770 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.770 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.770 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.770 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.770 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.770 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.770 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: ]] 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.771 15:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.339 nvme0n1 00:25:39.339 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.339 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.339 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.339 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:39.339 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.339 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.339 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.339 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.339 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.339 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: ]] 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.598 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.599 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.599 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.599 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.599 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.599 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.599 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.599 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.599 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.599 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.167 nvme0n1 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: ]] 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.167 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:40.168 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.168 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.168 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.168 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.168 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.168 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.168 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.168 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.168 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.168 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.168 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.168 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:25:40.168 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.168 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.168 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:40.168 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.168 15:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.736 nvme0n1 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.736 15:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:41.304 nvme0n1 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: ]] 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:41.304 15:34:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.304 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.564 nvme0n1 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: ]] 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.564 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.824 nvme0n1 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: ]] 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.824 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.084 nvme0n1 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: ]] 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.084 15:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.344 nvme0n1 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.344 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:42.603 nvme0n1 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:42.603 15:34:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:42.603 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: ]] 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.604 15:34:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.604 nvme0n1 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.604 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:42.863 15:34:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: ]] 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.863 nvme0n1 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.863 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.863 
15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: ]] 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.122 15:34:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.122 15:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.122 nvme0n1 00:25:43.122 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.122 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.122 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.122 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.122 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.122 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.381 15:34:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: ]] 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.381 15:34:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.381 nvme0n1 00:25:43.381 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.382 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.382 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.382 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.382 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.382 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.640 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:43.641 15:34:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.641 nvme0n1 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.641 
15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.641 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: ]] 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.900 
15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.900 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.158 nvme0n1 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.158 15:34:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: ]] 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.158 15:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.418 nvme0n1 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: ]] 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.418 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.677 nvme0n1 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: ]] 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.677 15:34:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.677 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.936 nvme0n1 00:25:44.936 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.936 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.936 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.936 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.936 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.936 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.195 15:34:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.195 15:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.452 nvme0n1 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.453 
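The trace above repeats one pattern per (digest, dhgroup, keyid) combination: restrict the initiator to a single digest/DH-group pair, attach with the matching `--dhchap-key`/`--dhchap-ctrlr-key` keyring names, confirm a controller named `nvme0` appears, then detach. A minimal sketch of that loop follows; `rpc_cmd` is stubbed with `echo` here so the command lines can be inspected without an SPDK target running (the real wrapper invokes `scripts/rpc.py`), and the NQNs/address are the ones shown in the trace:

```shell
#!/usr/bin/env bash
# Sketch of the connect_authenticate cycle seen in the trace.
# Assumption: rpc_cmd normally wraps SPDK's scripts/rpc.py; stubbed here.
rpc_cmd() { echo "rpc.py $*"; }

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Allow only one digest/dhgroup pair, as the test iteration does.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
        --dhchap-dhgroups "$dhgroup"
    # Attach using the pre-loaded keyring entries keyN / ckeyN.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
    # A successful CHAP exchange leaves a controller named nvme0;
    # the trace checks this via bdev_nvme_get_controllers + jq.
    rpc_cmd bdev_nvme_get_controllers
    rpc_cmd bdev_nvme_detach_controller nvme0
}

connect_authenticate sha512 ffdhe4096 3
```

Each pass through the trace is one such call, with the outer loops varying the DH group (ffdhe4096, ffdhe6144, ffdhe8192) and the key index 0 through 4.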
15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: ]] 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.453 15:34:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.453 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.711 nvme0n1 00:25:45.711 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.711 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.711 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.711 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.711 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.969 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.969 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.969 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.969 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:45.970 15:34:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: ]] 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.970 15:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.228 nvme0n1 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:46.228 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: ]] 00:25:46.486 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:46.486 
15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:46.486 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.486 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.486 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:46.486 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:46.486 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.486 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:46.486 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.486 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.486 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.487 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.487 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.487 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.487 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.487 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.487 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.487 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.487 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.487 15:34:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.487 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.487 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.487 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:46.487 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.487 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.745 nvme0n1 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.745 15:34:50 
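The three `echo` lines from `nvmet_auth_set_key` (`hmac(sha512)`, the DH group, and the `DHHC-1:` secrets) are the target-side half of each iteration: they populate the kernel nvmet per-host DH-CHAP attributes. A hedged sketch of that mapping is below; the real path is under `/sys/kernel/config/nvmet/hosts/<hostnqn>/`, but the sketch writes into a temporary directory so it runs without a configfs mount, and the truncated key value is an illustrative placeholder, not the trace's actual secret:

```shell
# Target-side sketch: nvmet_auth_set_key's echos map onto the per-host
# DH-CHAP configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key,
# and dhchap_ctrl_key when a controller key exists for the keyid).
# A temp dir stands in for /sys/kernel/config/nvmet/hosts/<hostnqn>.
host=$(mktemp -d)/nqn.2024-02.io.spdk:host0
mkdir -p "$host"
echo 'hmac(sha512)'       > "$host/dhchap_hash"     # digest
echo 'ffdhe4096'          > "$host/dhchap_dhgroup"  # DH group
echo 'DHHC-1:02:ZjUx...'  > "$host/dhchap_key"      # host key (placeholder)
cat "$host/dhchap_hash"
```

When the selected keyid has no controller key (the `ckey=` / `[[ -z '' ]]` branches in the trace), the `dhchap_ctrl_key` write is simply skipped and the attach proceeds with unidirectional authentication.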
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: ]] 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.745 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.311 nvme0n1 00:25:47.311 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.311 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.311 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.311 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.311 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.311 15:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.311 15:34:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:47.311 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:47.312 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.312 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:47.312 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.312 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.312 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.312 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.312 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.312 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.312 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.312 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.312 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.312 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.312 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.312 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.312 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.312 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.312 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:47.312 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.312 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.570 nvme0n1 00:25:47.570 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.570 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.570 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.570 
15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.570 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.570 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.570 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.570 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.570 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.570 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE5OGI4MjhiZWUyNDY1OWY3Mjk1MDNlYWQzOTYwYTkyrovr: 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: ]] 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjOTVmMjI5NGFlNzIwNTEzNTEwZjk5OTg1MDFlNDczOGNiM2M3OGNiM2Y1ZmQzYWJmNmQwMDIzYWZhYWUwZgAO2Fw=: 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.901 15:34:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.901 15:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.188 nvme0n1 00:25:48.188 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.189 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.189 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.189 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.189 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.189 15:34:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:48.447 15:34:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: ]] 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.447 15:34:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.447 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.015 nvme0n1 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.015 15:34:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: ]] 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:49.015 15:34:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.015 15:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.582 nvme0n1 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjUxNmE5NjE5N2FkMDU0MjBlOWM5YmExYzFiOTgxYzIyYzhhZmUyYzczODhmZGM3brkc/A==: 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: ]] 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTVkY2FlMTJlMjE1YmJiZWQ5Yzk3Mzc0YWVlY2IwN2T9zPTz: 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:49.582 15:34:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.582 15:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.150 nvme0n1 00:25:50.150 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.150 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.150 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.150 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.150 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.150 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2RkN2Q0ODY4ODM3NDllOWNmYTY5ZWE3NjE5Njk1ZWYzMThjYTMxYTU5OGU2ZGY4ZGU2YWU5NmQ0ZGRhYTk3Yzwjfuw=: 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.409 
15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.409 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.976 nvme0n1 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: ]] 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.976 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.976 request: 00:25:50.976 { 00:25:50.976 "name": "nvme0", 00:25:50.976 "trtype": "tcp", 00:25:50.976 "traddr": "10.0.0.1", 00:25:50.976 "adrfam": "ipv4", 00:25:50.976 "trsvcid": "4420", 00:25:50.976 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:50.976 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:50.976 "prchk_reftag": false, 00:25:50.976 "prchk_guard": false, 00:25:50.976 "hdgst": false, 00:25:50.976 "ddgst": false, 00:25:50.976 "allow_unrecognized_csi": false, 00:25:50.977 "method": "bdev_nvme_attach_controller", 00:25:50.977 "req_id": 1 00:25:50.977 } 00:25:50.977 Got JSON-RPC error 
response 00:25:50.977 response: 00:25:50.977 { 00:25:50.977 "code": -5, 00:25:50.977 "message": "Input/output error" 00:25:50.977 } 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:50.977 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:51.235 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.236 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:51.236 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.236 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.236 request: 
00:25:51.236 { 00:25:51.236 "name": "nvme0", 00:25:51.236 "trtype": "tcp", 00:25:51.236 "traddr": "10.0.0.1", 00:25:51.236 "adrfam": "ipv4", 00:25:51.236 "trsvcid": "4420", 00:25:51.236 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:51.236 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:51.236 "prchk_reftag": false, 00:25:51.236 "prchk_guard": false, 00:25:51.236 "hdgst": false, 00:25:51.236 "ddgst": false, 00:25:51.236 "dhchap_key": "key2", 00:25:51.236 "allow_unrecognized_csi": false, 00:25:51.236 "method": "bdev_nvme_attach_controller", 00:25:51.236 "req_id": 1 00:25:51.236 } 00:25:51.236 Got JSON-RPC error response 00:25:51.236 response: 00:25:51.236 { 00:25:51.236 "code": -5, 00:25:51.236 "message": "Input/output error" 00:25:51.236 } 00:25:51.236 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:51.236 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:51.236 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:51.236 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:51.236 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:51.236 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.236 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.236 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.236 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:51.236 15:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.236 15:34:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.236 request: 00:25:51.236 { 00:25:51.236 "name": "nvme0", 00:25:51.236 "trtype": "tcp", 00:25:51.236 "traddr": "10.0.0.1", 00:25:51.236 "adrfam": "ipv4", 00:25:51.236 "trsvcid": "4420", 00:25:51.236 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:51.236 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:51.236 "prchk_reftag": false, 00:25:51.236 "prchk_guard": false, 00:25:51.236 "hdgst": false, 00:25:51.236 "ddgst": false, 00:25:51.236 "dhchap_key": "key1", 00:25:51.236 "dhchap_ctrlr_key": "ckey2", 00:25:51.236 "allow_unrecognized_csi": false, 00:25:51.236 "method": "bdev_nvme_attach_controller", 00:25:51.236 "req_id": 1 00:25:51.236 } 00:25:51.236 Got JSON-RPC error response 00:25:51.236 response: 00:25:51.236 { 00:25:51.236 "code": -5, 00:25:51.236 "message": "Input/output error" 00:25:51.236 } 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.236 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.495 nvme0n1 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:51.495 15:34:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: ]] 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.495 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.495 request: 00:25:51.495 { 00:25:51.495 "name": "nvme0", 00:25:51.495 "dhchap_key": "key1", 00:25:51.754 "dhchap_ctrlr_key": "ckey2", 00:25:51.754 "method": "bdev_nvme_set_keys", 00:25:51.754 "req_id": 1 00:25:51.754 } 00:25:51.754 Got JSON-RPC error response 00:25:51.754 
response: 00:25:51.754 { 00:25:51.754 "code": -13, 00:25:51.754 "message": "Permission denied" 00:25:51.754 } 00:25:51.754 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:51.754 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:51.754 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:51.754 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:51.754 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:51.754 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.754 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:51.754 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.754 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.754 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.754 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:51.754 15:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:52.690 15:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.690 15:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:52.690 15:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.690 15:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.690 15:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.690 15:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:52.690 15:34:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:53.625 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.625 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:53.625 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.625 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.625 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTM2MzM2MjkzZWU3MDJiMmEyMWZlYWZmZGQ5NjgwMWY2OGE2NjgzM2ZhOTQ0ZWMy6dC0Dg==: 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: ]] 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODExOTcyMzU3ZTdmMTIxOTdmOWI1YWE5OWYyNTdlNjlkMjcwODM0MTMzNmU2NTQ5XdL7Qg==: 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.885 nvme0n1 00:25:53.885 15:34:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4Y2IxZjExNmQ2ODg3YTA2ODIyYmUxNDc0MzJjYWXeaHBL: 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: ]] 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFkYmNmY2NmMjE3ZjY2OThhZjIyZGJhODc0OGI4OTJlBVMt: 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:53.885 15:34:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.885 request: 00:25:53.885 { 00:25:53.885 "name": "nvme0", 00:25:53.885 "dhchap_key": "key2", 00:25:53.885 "dhchap_ctrlr_key": "ckey1", 00:25:53.885 "method": "bdev_nvme_set_keys", 00:25:53.885 "req_id": 1 00:25:53.885 } 00:25:53.885 Got JSON-RPC error response 00:25:53.885 response: 00:25:53.885 { 00:25:53.885 "code": -13, 00:25:53.885 "message": "Permission denied" 00:25:53.885 } 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:53.885 15:34:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.885 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.144 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:54.144 15:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:55.078 rmmod nvme_tcp 
00:25:55.078 rmmod nvme_fabrics 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2293517 ']' 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2293517 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2293517 ']' 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2293517 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2293517 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2293517' 00:25:55.078 killing process with pid 2293517 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2293517 00:25:55.078 15:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2293517 00:25:55.337 15:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:55.337 15:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:55.337 15:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:55.337 15:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:55.337 15:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:55.337 15:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:55.337 15:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:55.337 15:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:55.337 15:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:55.337 15:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.337 15:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:55.337 15:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.875 15:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:57.875 15:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:57.875 15:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:57.875 15:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:57.875 15:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:57.875 15:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:57.875 15:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:57.875 15:35:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:57.875 15:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:57.875 15:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:57.875 15:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:57.875 15:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:57.875 15:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:00.420 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:00.420 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:00.420 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:00.420 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:00.420 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:00.420 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:00.420 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:00.420 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:00.420 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:00.420 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:00.420 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:00.420 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:00.420 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:00.420 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:00.420 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:00.420 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:01.357 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:01.357 15:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.LZR /tmp/spdk.key-null.jnW /tmp/spdk.key-sha256.jb9 /tmp/spdk.key-sha384.3tu 
/tmp/spdk.key-sha512.AeR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:01.357 15:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:04.651 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:04.651 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:04.651 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:04.651 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:04.651 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:04.651 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:04.651 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:04.651 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:04.651 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:04.651 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:04.651 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:04.651 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:04.651 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:04.651 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:04.651 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:04.651 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:04.651 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:04.651 00:26:04.651 real 0m54.149s 00:26:04.651 user 0m48.846s 00:26:04.651 sys 0m12.673s 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.651 ************************************ 00:26:04.651 END TEST nvmf_auth_host 00:26:04.651 ************************************ 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.651 ************************************ 00:26:04.651 START TEST nvmf_digest 00:26:04.651 ************************************ 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:04.651 * Looking for test storage... 00:26:04.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:04.651 15:35:08 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:04.651 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:04.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.652 --rc genhtml_branch_coverage=1 00:26:04.652 --rc genhtml_function_coverage=1 00:26:04.652 --rc genhtml_legend=1 00:26:04.652 --rc geninfo_all_blocks=1 00:26:04.652 --rc geninfo_unexecuted_blocks=1 00:26:04.652 00:26:04.652 ' 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:04.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.652 --rc genhtml_branch_coverage=1 00:26:04.652 --rc genhtml_function_coverage=1 00:26:04.652 --rc genhtml_legend=1 00:26:04.652 --rc geninfo_all_blocks=1 00:26:04.652 --rc geninfo_unexecuted_blocks=1 00:26:04.652 00:26:04.652 ' 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:04.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.652 --rc genhtml_branch_coverage=1 00:26:04.652 --rc genhtml_function_coverage=1 00:26:04.652 --rc genhtml_legend=1 00:26:04.652 --rc geninfo_all_blocks=1 00:26:04.652 --rc geninfo_unexecuted_blocks=1 00:26:04.652 00:26:04.652 ' 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:04.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.652 --rc genhtml_branch_coverage=1 00:26:04.652 --rc genhtml_function_coverage=1 00:26:04.652 --rc genhtml_legend=1 00:26:04.652 --rc geninfo_all_blocks=1 00:26:04.652 --rc geninfo_unexecuted_blocks=1 00:26:04.652 00:26:04.652 ' 00:26:04.652 15:35:08 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.652 
15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:04.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:04.652 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:04.653 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.653 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:04.653 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:04.653 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:04.653 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.653 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.653 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.653 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:04.653 15:35:08 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:04.653 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:04.653 15:35:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:11.220 15:35:14 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:11.220 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:11.220 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:11.220 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:11.221 Found net devices under 0000:86:00.0: cvl_0_0 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:11.221 Found net devices under 0000:86:00.1: cvl_0_1 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:11.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:11.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:26:11.221 00:26:11.221 --- 10.0.0.2 ping statistics --- 00:26:11.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.221 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:11.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:11.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:26:11.221 00:26:11.221 --- 10.0.0.1 ping statistics --- 00:26:11.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.221 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:11.221 ************************************ 00:26:11.221 START TEST nvmf_digest_clean 00:26:11.221 ************************************ 00:26:11.221 
15:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2307369 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2307369 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2307369 ']' 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:11.221 15:35:14 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:11.221 15:35:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.221 [2024-11-20 15:35:14.384226] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:26:11.221 [2024-11-20 15:35:14.384276] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:11.221 [2024-11-20 15:35:14.468022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.221 [2024-11-20 15:35:14.507310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:11.221 [2024-11-20 15:35:14.507347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:11.222 [2024-11-20 15:35:14.507356] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:11.222 [2024-11-20 15:35:14.507362] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:11.222 [2024-11-20 15:35:14.507367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:11.222 [2024-11-20 15:35:14.507967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.481 null0 00:26:11.481 [2024-11-20 15:35:15.348918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:11.481 [2024-11-20 15:35:15.373144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2307486 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2307486 /var/tmp/bperf.sock 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2307486 ']' 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:11.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:11.481 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.740 [2024-11-20 15:35:15.425605] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:26:11.740 [2024-11-20 15:35:15.425647] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2307486 ] 00:26:11.740 [2024-11-20 15:35:15.499714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.740 [2024-11-20 15:35:15.542294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.740 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.740 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:11.740 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:11.740 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:11.740 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:11.999 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:11.999 15:35:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:12.566 nvme0n1 00:26:12.566 15:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:12.566 15:35:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:12.566 Running I/O for 2 seconds... 00:26:14.880 25017.00 IOPS, 97.72 MiB/s [2024-11-20T14:35:18.788Z] 24913.50 IOPS, 97.32 MiB/s 00:26:14.880 Latency(us) 00:26:14.880 [2024-11-20T14:35:18.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.880 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:14.880 nvme0n1 : 2.00 24934.19 97.40 0.00 0.00 5128.49 2578.70 12081.42 00:26:14.880 [2024-11-20T14:35:18.789Z] =================================================================================================================== 00:26:14.881 [2024-11-20T14:35:18.789Z] Total : 24934.19 97.40 0.00 0.00 5128.49 2578.70 12081.42 00:26:14.881 { 00:26:14.881 "results": [ 00:26:14.881 { 00:26:14.881 "job": "nvme0n1", 00:26:14.881 "core_mask": "0x2", 00:26:14.881 "workload": "randread", 00:26:14.881 "status": "finished", 00:26:14.881 "queue_depth": 128, 00:26:14.881 "io_size": 4096, 00:26:14.881 "runtime": 2.003474, 00:26:14.881 "iops": 24934.189313163035, 00:26:14.881 "mibps": 97.3991770045431, 00:26:14.881 "io_failed": 0, 00:26:14.881 "io_timeout": 0, 00:26:14.881 "avg_latency_us": 5128.486678567233, 00:26:14.881 "min_latency_us": 2578.6991304347825, 00:26:14.881 "max_latency_us": 12081.419130434782 00:26:14.881 } 00:26:14.881 ], 00:26:14.881 "core_count": 1 00:26:14.881 } 00:26:14.881 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:14.881 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:26:14.881 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:14.881 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:14.881 | select(.opcode=="crc32c") 00:26:14.881 | "\(.module_name) \(.executed)"' 00:26:14.881 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:14.881 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:14.881 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:14.881 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:14.881 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:14.881 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2307486 00:26:14.881 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2307486 ']' 00:26:14.881 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2307486 00:26:14.881 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:14.881 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:14.881 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2307486 00:26:14.881 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:14.881 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:14.881 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2307486' 00:26:14.881 killing process with pid 2307486 00:26:14.881 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2307486 00:26:14.881 Received shutdown signal, test time was about 2.000000 seconds 00:26:14.881 00:26:14.881 Latency(us) 00:26:14.881 [2024-11-20T14:35:18.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.881 [2024-11-20T14:35:18.789Z] =================================================================================================================== 00:26:14.881 [2024-11-20T14:35:18.789Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:14.881 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2307486 00:26:15.140 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:15.140 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:15.140 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:15.140 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:15.140 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:15.140 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:15.140 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:15.140 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2308094 00:26:15.140 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 2308094 /var/tmp/bperf.sock 00:26:15.140 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:15.140 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2308094 ']' 00:26:15.140 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:15.140 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:15.140 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:15.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:15.140 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:15.140 15:35:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:15.140 [2024-11-20 15:35:18.852193] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:26:15.140 [2024-11-20 15:35:18.852244] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2308094 ] 00:26:15.140 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:15.140 Zero copy mechanism will not be used. 
00:26:15.140 [2024-11-20 15:35:18.926209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.140 [2024-11-20 15:35:18.966716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.140 15:35:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.140 15:35:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:15.140 15:35:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:15.140 15:35:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:15.140 15:35:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:15.400 15:35:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.400 15:35:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.966 nvme0n1 00:26:15.966 15:35:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:15.966 15:35:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:15.966 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:15.966 Zero copy mechanism will not be used. 00:26:15.966 Running I/O for 2 seconds... 
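The bdevperf summaries in this log report both IOPS and MiB/s for each run; the two figures are related through the configured I/O size (MiB/s = IOPS × io_size / 2^20). A minimal sketch re-deriving that relationship from the `iops`, `io_size`, and `mibps` fields of the results JSON captured above (values copied from this log, the helper name is illustrative):

```python
# Re-derive the MiB/s figure bdevperf reports from its IOPS and I/O size:
# mibps = iops * io_size_bytes / 2**20
def iops_to_mibps(iops, io_size_bytes):
    return iops * io_size_bytes / 2**20

# Values taken from the 131072-byte randread run in this log.
iops = 5855.515017118541
io_size = 131072
mibps = iops_to_mibps(iops, io_size)
print(round(mibps, 4))  # -> 731.9394, matching the reported 731.9393771398177 mibps
```

The same check holds for the 4096-byte runs (e.g. 24934.19 IOPS × 4096 B ≈ 97.40 MiB/s).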
00:26:18.276 5658.00 IOPS, 707.25 MiB/s [2024-11-20T14:35:22.184Z] 5854.00 IOPS, 731.75 MiB/s 00:26:18.276 Latency(us) 00:26:18.276 [2024-11-20T14:35:22.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.276 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:18.276 nvme0n1 : 2.00 5855.52 731.94 0.00 0.00 2729.83 644.67 6724.56 00:26:18.276 [2024-11-20T14:35:22.184Z] =================================================================================================================== 00:26:18.276 [2024-11-20T14:35:22.184Z] Total : 5855.52 731.94 0.00 0.00 2729.83 644.67 6724.56 00:26:18.276 { 00:26:18.276 "results": [ 00:26:18.276 { 00:26:18.276 "job": "nvme0n1", 00:26:18.276 "core_mask": "0x2", 00:26:18.276 "workload": "randread", 00:26:18.276 "status": "finished", 00:26:18.276 "queue_depth": 16, 00:26:18.276 "io_size": 131072, 00:26:18.276 "runtime": 2.002215, 00:26:18.276 "iops": 5855.515017118541, 00:26:18.276 "mibps": 731.9393771398177, 00:26:18.276 "io_failed": 0, 00:26:18.276 "io_timeout": 0, 00:26:18.276 "avg_latency_us": 2729.8293967039, 00:26:18.276 "min_latency_us": 644.6747826086956, 00:26:18.276 "max_latency_us": 6724.5634782608695 00:26:18.276 } 00:26:18.276 ], 00:26:18.276 "core_count": 1 00:26:18.276 } 00:26:18.276 15:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:18.276 15:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:18.276 15:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:18.276 15:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:18.276 | select(.opcode=="crc32c") 00:26:18.276 | "\(.module_name) \(.executed)"' 00:26:18.276 15:35:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:18.276 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:18.276 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:18.276 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:18.276 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:18.276 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2308094 00:26:18.276 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2308094 ']' 00:26:18.276 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2308094 00:26:18.276 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:18.276 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:18.276 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2308094 00:26:18.276 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:18.276 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:18.276 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2308094' 00:26:18.276 killing process with pid 2308094 00:26:18.276 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2308094 00:26:18.276 Received shutdown signal, test time was about 2.000000 seconds 
00:26:18.276 00:26:18.276 Latency(us) 00:26:18.276 [2024-11-20T14:35:22.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.276 [2024-11-20T14:35:22.184Z] =================================================================================================================== 00:26:18.276 [2024-11-20T14:35:22.184Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:18.276 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2308094 00:26:18.535 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:18.535 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:18.535 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:18.535 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:18.535 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:18.535 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:18.535 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:18.535 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2308569 00:26:18.535 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2308569 /var/tmp/bperf.sock 00:26:18.535 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:18.535 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2308569 ']' 00:26:18.535 15:35:22 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:18.535 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:18.535 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:18.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:18.535 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:18.535 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:18.535 [2024-11-20 15:35:22.279635] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:26:18.535 [2024-11-20 15:35:22.279684] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2308569 ] 00:26:18.535 [2024-11-20 15:35:22.354259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.535 [2024-11-20 15:35:22.391693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.795 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:18.795 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:18.795 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:18.795 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:18.795 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:19.053 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.053 15:35:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.312 nvme0n1 00:26:19.312 15:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:19.312 15:35:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:19.312 Running I/O for 2 seconds... 
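Each run above pipes `accel_get_stats` through the jq filter `.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"` to read back which accel module executed the crc32c operations (the test then compares it against the expected `software` module). The same selection can be sketched in Python; the payload below is a hypothetical `accel_get_stats` response shaped after the fields the filter touches, not output captured from this run:

```python
import json

# Hypothetical accel_get_stats payload; only the fields the jq filter reads.
stats_json = '''
{
  "operations": [
    {"opcode": "copy",   "module_name": "software", "executed": 12},
    {"opcode": "crc32c", "module_name": "software", "executed": 49868}
  ]
}
'''

def crc32c_module_and_count(payload):
    """Python equivalent of the jq filter used in host/digest.sh:
    .operations[] | select(.opcode=="crc32c") | "\\(.module_name) \\(.executed)"
    """
    for op in json.loads(payload)["operations"]:
        if op["opcode"] == "crc32c":
            return f'{op["module_name"]} {op["executed"]}'
    return None  # no crc32c entry in the stats

print(crc32c_module_and_count(stats_json))  # -> software 49868
```

The `read -r acc_module acc_executed` in the trace then splits that two-field string into the module name and execution count.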
00:26:21.626 27856.00 IOPS, 108.81 MiB/s [2024-11-20T14:35:25.534Z] 27944.50 IOPS, 109.16 MiB/s 00:26:21.626 Latency(us) 00:26:21.626 [2024-11-20T14:35:25.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.626 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:21.626 nvme0n1 : 2.00 27976.68 109.28 0.00 0.00 4571.21 1866.35 9061.06 00:26:21.626 [2024-11-20T14:35:25.534Z] =================================================================================================================== 00:26:21.626 [2024-11-20T14:35:25.534Z] Total : 27976.68 109.28 0.00 0.00 4571.21 1866.35 9061.06 00:26:21.626 { 00:26:21.626 "results": [ 00:26:21.626 { 00:26:21.626 "job": "nvme0n1", 00:26:21.626 "core_mask": "0x2", 00:26:21.626 "workload": "randwrite", 00:26:21.626 "status": "finished", 00:26:21.626 "queue_depth": 128, 00:26:21.626 "io_size": 4096, 00:26:21.626 "runtime": 2.002275, 00:26:21.626 "iops": 27976.676530446617, 00:26:21.626 "mibps": 109.2838926970571, 00:26:21.626 "io_failed": 0, 00:26:21.626 "io_timeout": 0, 00:26:21.626 "avg_latency_us": 4571.208370378246, 00:26:21.626 "min_latency_us": 1866.351304347826, 00:26:21.626 "max_latency_us": 9061.064347826086 00:26:21.626 } 00:26:21.626 ], 00:26:21.626 "core_count": 1 00:26:21.626 } 00:26:21.626 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:21.626 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:21.626 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:21.626 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:21.626 | select(.opcode=="crc32c") 00:26:21.626 | "\(.module_name) \(.executed)"' 00:26:21.626 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:21.626 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:21.626 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:21.626 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:21.626 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:21.626 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2308569 00:26:21.626 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2308569 ']' 00:26:21.626 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2308569 00:26:21.626 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:21.626 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:21.626 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2308569 00:26:21.626 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:21.626 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:21.626 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2308569' 00:26:21.626 killing process with pid 2308569 00:26:21.626 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2308569 00:26:21.626 Received shutdown signal, test time was about 2.000000 seconds 
00:26:21.626 00:26:21.626 Latency(us) 00:26:21.626 [2024-11-20T14:35:25.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.626 [2024-11-20T14:35:25.534Z] =================================================================================================================== 00:26:21.626 [2024-11-20T14:35:25.534Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:21.626 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2308569 00:26:21.886 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:21.886 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:21.886 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:21.886 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:21.886 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:21.886 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:21.886 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:21.886 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2309254 00:26:21.886 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2309254 /var/tmp/bperf.sock 00:26:21.886 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:21.886 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2309254 ']' 00:26:21.886 15:35:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:21.886 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:21.886 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:21.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:21.886 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:21.886 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:21.886 [2024-11-20 15:35:25.696231] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:26:21.886 [2024-11-20 15:35:25.696279] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2309254 ] 00:26:21.886 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:21.886 Zero copy mechanism will not be used. 
00:26:21.886 [2024-11-20 15:35:25.770301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.145 [2024-11-20 15:35:25.813659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.145 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:22.145 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:22.145 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:22.145 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:22.145 15:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:22.403 15:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:22.404 15:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:22.662 nvme0n1 00:26:22.662 15:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:22.662 15:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:22.662 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:22.662 Zero copy mechanism will not be used. 00:26:22.662 Running I/O for 2 seconds... 
00:26:24.975 6957.00 IOPS, 869.62 MiB/s [2024-11-20T14:35:28.883Z] 6802.50 IOPS, 850.31 MiB/s 00:26:24.975 Latency(us) 00:26:24.975 [2024-11-20T14:35:28.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.975 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:24.975 nvme0n1 : 2.00 6801.07 850.13 0.00 0.00 2348.58 1738.13 8833.11 00:26:24.975 [2024-11-20T14:35:28.883Z] =================================================================================================================== 00:26:24.975 [2024-11-20T14:35:28.883Z] Total : 6801.07 850.13 0.00 0.00 2348.58 1738.13 8833.11 00:26:24.975 { 00:26:24.975 "results": [ 00:26:24.975 { 00:26:24.975 "job": "nvme0n1", 00:26:24.975 "core_mask": "0x2", 00:26:24.975 "workload": "randwrite", 00:26:24.975 "status": "finished", 00:26:24.975 "queue_depth": 16, 00:26:24.975 "io_size": 131072, 00:26:24.975 "runtime": 2.002773, 00:26:24.975 "iops": 6801.070316006856, 00:26:24.975 "mibps": 850.133789500857, 00:26:24.975 "io_failed": 0, 00:26:24.975 "io_timeout": 0, 00:26:24.975 "avg_latency_us": 2348.5792993555347, 00:26:24.975 "min_latency_us": 1738.128695652174, 00:26:24.975 "max_latency_us": 8833.11304347826 00:26:24.975 } 00:26:24.975 ], 00:26:24.975 "core_count": 1 00:26:24.975 } 00:26:24.975 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:24.975 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:24.975 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:24.975 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:24.975 | select(.opcode=="crc32c") 00:26:24.975 | "\(.module_name) \(.executed)"' 00:26:24.975 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:24.975 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:24.975 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:24.975 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:24.975 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:24.975 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2309254 00:26:24.975 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2309254 ']' 00:26:24.975 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2309254 00:26:24.976 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:24.976 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:24.976 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2309254 00:26:24.976 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:24.976 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:24.976 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2309254' 00:26:24.976 killing process with pid 2309254 00:26:24.976 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2309254 00:26:24.976 Received shutdown signal, test time was about 2.000000 seconds 
00:26:24.976 00:26:24.976 Latency(us) 00:26:24.976 [2024-11-20T14:35:28.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.976 [2024-11-20T14:35:28.884Z] =================================================================================================================== 00:26:24.976 [2024-11-20T14:35:28.884Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:24.976 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2309254 00:26:25.234 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2307369 00:26:25.234 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2307369 ']' 00:26:25.234 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2307369 00:26:25.234 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:25.234 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:25.234 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2307369 00:26:25.234 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:25.234 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:25.234 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2307369' 00:26:25.234 killing process with pid 2307369 00:26:25.234 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2307369 00:26:25.234 15:35:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2307369 00:26:25.234 00:26:25.234 
real 0m14.771s 00:26:25.234 user 0m27.788s 00:26:25.234 sys 0m4.640s 00:26:25.234 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:25.234 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:25.234 ************************************ 00:26:25.234 END TEST nvmf_digest_clean 00:26:25.234 ************************************ 00:26:25.234 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:25.234 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:25.234 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:25.235 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:25.492 ************************************ 00:26:25.492 START TEST nvmf_digest_error 00:26:25.492 ************************************ 00:26:25.492 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:26:25.492 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:25.492 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:25.492 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:25.492 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.492 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2309758 00:26:25.492 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:25.492 
15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2309758 00:26:25.492 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2309758 ']' 00:26:25.492 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.492 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.492 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.492 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.492 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.492 [2024-11-20 15:35:29.225486] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:26:25.492 [2024-11-20 15:35:29.225528] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.492 [2024-11-20 15:35:29.303339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.492 [2024-11-20 15:35:29.343886] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:25.492 [2024-11-20 15:35:29.343925] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:25.492 [2024-11-20 15:35:29.343932] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.492 [2024-11-20 15:35:29.343939] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.492 [2024-11-20 15:35:29.343944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:25.492 [2024-11-20 15:35:29.344525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.492 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:25.492 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:25.492 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:25.492 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:25.492 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.751 [2024-11-20 15:35:29.416979] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.751 15:35:29 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.751 null0 00:26:25.751 [2024-11-20 15:35:29.511613] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.751 [2024-11-20 15:35:29.535826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2309793 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2309793 /var/tmp/bperf.sock 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2309793 ']' 
00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:25.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.751 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.751 [2024-11-20 15:35:29.590064] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:26:25.751 [2024-11-20 15:35:29.590107] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2309793 ] 00:26:26.010 [2024-11-20 15:35:29.666631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.010 [2024-11-20 15:35:29.709183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.010 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:26.010 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:26.010 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:26.010 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:26.269 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:26.269 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.269 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:26.269 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.269 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:26.269 15:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:26.527 nvme0n1 00:26:26.527 15:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:26.527 15:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.527 15:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:26.527 15:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.527 15:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:26.527 15:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:26.785 Running I/O for 2 seconds... 00:26:26.785 [2024-11-20 15:35:30.478183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:26.785 [2024-11-20 15:35:30.478217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.785 [2024-11-20 15:35:30.478228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.785 [2024-11-20 15:35:30.490349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:26.785 [2024-11-20 15:35:30.490374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.785 [2024-11-20 15:35:30.490383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.785 [2024-11-20 15:35:30.503433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:26.785 [2024-11-20 15:35:30.503455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.785 [2024-11-20 15:35:30.503464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.785 [2024-11-20 15:35:30.512528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:26.785 [2024-11-20 15:35:30.512549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13701 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.785 [2024-11-20 15:35:30.512562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.785 [2024-11-20 15:35:30.522513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:26.785 [2024-11-20 15:35:30.522535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.785 [2024-11-20 15:35:30.522543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.785 [2024-11-20 15:35:30.534052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:26.785 [2024-11-20 15:35:30.534073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.786 [2024-11-20 15:35:30.534082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.786 [2024-11-20 15:35:30.543469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:26.786 [2024-11-20 15:35:30.543490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.786 [2024-11-20 15:35:30.543498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.786 [2024-11-20 15:35:30.554543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:26.786 [2024-11-20 15:35:30.554564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.786 [2024-11-20 15:35:30.554572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.786 [2024-11-20 15:35:30.563605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:26.786 [2024-11-20 15:35:30.563627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.786 [2024-11-20 15:35:30.563635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.786 [2024-11-20 15:35:30.575631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:26.786 [2024-11-20 15:35:30.575652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.786 [2024-11-20 15:35:30.575660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.786 [2024-11-20 15:35:30.584980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:26.786 [2024-11-20 15:35:30.585001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.786 [2024-11-20 15:35:30.585009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.786 [2024-11-20 15:35:30.596750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b22370) 00:26:26.786 [2024-11-20 15:35:30.596771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.786 [2024-11-20 15:35:30.596779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.786 [2024-11-20 15:35:30.607731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:26.786 [2024-11-20 15:35:30.607754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.786 [2024-11-20 15:35:30.607763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.786 [2024-11-20 15:35:30.617245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:26.786 [2024-11-20 15:35:30.617265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.786 [2024-11-20 15:35:30.617273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.786 [2024-11-20 15:35:30.625844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:26.786 [2024-11-20 15:35:30.625865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.786 [2024-11-20 15:35:30.625873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.786 [2024-11-20 15:35:30.635552] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:26.786 [2024-11-20 15:35:30.635573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.786 [2024-11-20 15:35:30.635581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.786 [2024-11-20 15:35:30.646426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:26.786 [2024-11-20 15:35:30.646446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.786 [2024-11-20 15:35:30.646455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.786 [2024-11-20 15:35:30.660139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:26.786 [2024-11-20 15:35:30.660159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.786 [2024-11-20 15:35:30.660167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.786 [2024-11-20 15:35:30.672046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:26.786 [2024-11-20 15:35:30.672066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.786 [2024-11-20 15:35:30.672075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:26.786 [2024-11-20 15:35:30.680395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:26.786 [2024-11-20 15:35:30.680416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.786 [2024-11-20 15:35:30.680424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.045 [2024-11-20 15:35:30.692762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.045 [2024-11-20 15:35:30.692783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.045 [2024-11-20 15:35:30.692792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.045 [2024-11-20 15:35:30.704096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.045 [2024-11-20 15:35:30.704117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.045 [2024-11-20 15:35:30.704125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.045 [2024-11-20 15:35:30.713501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.045 [2024-11-20 15:35:30.713520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.045 [2024-11-20 15:35:30.713528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.045 [2024-11-20 15:35:30.723927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.045 [2024-11-20 15:35:30.723953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.045 [2024-11-20 15:35:30.723962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.045 [2024-11-20 15:35:30.736549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.045 [2024-11-20 15:35:30.736570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.045 [2024-11-20 15:35:30.736579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.045 [2024-11-20 15:35:30.748315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.045 [2024-11-20 15:35:30.748335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.045 [2024-11-20 15:35:30.748343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.045 [2024-11-20 15:35:30.756905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.046 [2024-11-20 15:35:30.756925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.046 [2024-11-20 
15:35:30.756933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.046 [2024-11-20 15:35:30.766860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.046 [2024-11-20 15:35:30.766880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.046 [2024-11-20 15:35:30.766889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.046 [2024-11-20 15:35:30.776526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.046 [2024-11-20 15:35:30.776546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.046 [2024-11-20 15:35:30.776554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.046 [2024-11-20 15:35:30.785721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.046 [2024-11-20 15:35:30.785746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.046 [2024-11-20 15:35:30.785754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.046 [2024-11-20 15:35:30.795420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.046 [2024-11-20 15:35:30.795440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25079 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.046 [2024-11-20 15:35:30.795448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.046 [2024-11-20 15:35:30.806408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.046 [2024-11-20 15:35:30.806428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.046 [2024-11-20 15:35:30.806436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.046 [2024-11-20 15:35:30.814964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.046 [2024-11-20 15:35:30.814984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.046 [2024-11-20 15:35:30.814992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.046 [2024-11-20 15:35:30.824698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.046 [2024-11-20 15:35:30.824717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.046 [2024-11-20 15:35:30.824725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.046 [2024-11-20 15:35:30.834721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.046 [2024-11-20 15:35:30.834741] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.046 [2024-11-20 15:35:30.834749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.046 [2024-11-20 15:35:30.843717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.046 [2024-11-20 15:35:30.843736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.046 [2024-11-20 15:35:30.843744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.046 [2024-11-20 15:35:30.853115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.046 [2024-11-20 15:35:30.853135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.046 [2024-11-20 15:35:30.853143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.046 [2024-11-20 15:35:30.863525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.046 [2024-11-20 15:35:30.863544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.046 [2024-11-20 15:35:30.863552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.046 [2024-11-20 15:35:30.874090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.046 [2024-11-20 
15:35:30.874110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.046 [2024-11-20 15:35:30.874118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.046 [2024-11-20 15:35:30.883027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.046 [2024-11-20 15:35:30.883048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.046 [2024-11-20 15:35:30.883056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.046 [2024-11-20 15:35:30.894863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.046 [2024-11-20 15:35:30.894884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.046 [2024-11-20 15:35:30.894892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.046 [2024-11-20 15:35:30.905185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.046 [2024-11-20 15:35:30.905204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.046 [2024-11-20 15:35:30.905213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.046 [2024-11-20 15:35:30.915568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1b22370) 00:26:27.046 [2024-11-20 15:35:30.915588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.046 [2024-11-20 15:35:30.915596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.046 [2024-11-20 15:35:30.925043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.046 [2024-11-20 15:35:30.925063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.046 [2024-11-20 15:35:30.925071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.046 [2024-11-20 15:35:30.934126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.046 [2024-11-20 15:35:30.934146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.046 [2024-11-20 15:35:30.934154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.046 [2024-11-20 15:35:30.943693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.046 [2024-11-20 15:35:30.943714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.046 [2024-11-20 15:35:30.943723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:30.952531] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:30.952552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:30.952564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:30.963883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:30.963907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:30.963915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:30.972424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:30.972445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:30.972453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:30.981919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:30.981942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:30.981958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:30.991617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:30.991639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:30.991647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:31.002457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:31.002479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:31.002488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:31.010551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:31.010572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:31.010580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:31.021074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:31.021095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:31.021103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:31.031550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:31.031570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:31.031579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:31.042913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:31.042937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:31.042945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:31.055492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:31.055513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:31.055522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:31.066462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:31.066484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 
15:35:31.066492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:31.074297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:31.074317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:31.074326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:31.084733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:31.084754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:31.084762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:31.095030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:31.095052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:31.095060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:31.108235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:31.108257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3850 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:31.108265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:31.118831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:31.118853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:31.118862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:31.128613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:31.128635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:31.128643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:31.138325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:31.138346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:31.138354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:31.149089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:31.149111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:31.149119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:31.160322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:31.160343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:31.160352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:31.169410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:31.169431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:31.169440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:31.180245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:31.180267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:31.180275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:31.190344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:31.190365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:31.190374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.306 [2024-11-20 15:35:31.198667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.306 [2024-11-20 15:35:31.198689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.306 [2024-11-20 15:35:31.198697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.565 [2024-11-20 15:35:31.211229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.565 [2024-11-20 15:35:31.211251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.565 [2024-11-20 15:35:31.211259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.565 [2024-11-20 15:35:31.223056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.565 [2024-11-20 15:35:31.223078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.565 [2024-11-20 15:35:31.223089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.565 [2024-11-20 15:35:31.231523] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.565 [2024-11-20 15:35:31.231544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.565 [2024-11-20 15:35:31.231552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.565 [2024-11-20 15:35:31.241317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.565 [2024-11-20 15:35:31.241337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.565 [2024-11-20 15:35:31.241345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.565 [2024-11-20 15:35:31.252418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.565 [2024-11-20 15:35:31.252440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.565 [2024-11-20 15:35:31.252449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.565 [2024-11-20 15:35:31.265227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.565 [2024-11-20 15:35:31.265248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.565 [2024-11-20 15:35:31.265257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:27.565 [2024-11-20 15:35:31.274396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.565 [2024-11-20 15:35:31.274416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.565 [2024-11-20 15:35:31.274424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.565 [2024-11-20 15:35:31.284124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.565 [2024-11-20 15:35:31.284144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.565 [2024-11-20 15:35:31.284152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.565 [2024-11-20 15:35:31.294697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.565 [2024-11-20 15:35:31.294718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.565 [2024-11-20 15:35:31.294726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.565 [2024-11-20 15:35:31.303363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.565 [2024-11-20 15:35:31.303384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.565 [2024-11-20 15:35:31.303392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.565 [2024-11-20 15:35:31.314516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.565 [2024-11-20 15:35:31.314538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.565 [2024-11-20 15:35:31.314547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.565 [2024-11-20 15:35:31.326187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.565 [2024-11-20 15:35:31.326207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.565 [2024-11-20 15:35:31.326215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.565 [2024-11-20 15:35:31.338783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.565 [2024-11-20 15:35:31.338805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.565 [2024-11-20 15:35:31.338814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.565 [2024-11-20 15:35:31.350466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.565 [2024-11-20 15:35:31.350488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.565 [2024-11-20 
15:35:31.350496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.565 [2024-11-20 15:35:31.363232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.565 [2024-11-20 15:35:31.363254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.565 [2024-11-20 15:35:31.363263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.565 [2024-11-20 15:35:31.371245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.565 [2024-11-20 15:35:31.371267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.565 [2024-11-20 15:35:31.371275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.565 [2024-11-20 15:35:31.383003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.565 [2024-11-20 15:35:31.383024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.565 [2024-11-20 15:35:31.383033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.565 [2024-11-20 15:35:31.395211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:27.565 [2024-11-20 15:35:31.395231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10201 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.565 [2024-11-20 15:35:31.395239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.565 [2024-11-20 15:35:31.404116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.565 [2024-11-20 15:35:31.404136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.566 [2024-11-20 15:35:31.404148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.566 [2024-11-20 15:35:31.416247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.566 [2024-11-20 15:35:31.416268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.566 [2024-11-20 15:35:31.416276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.566 [2024-11-20 15:35:31.428797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.566 [2024-11-20 15:35:31.428818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.566 [2024-11-20 15:35:31.428826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.566 [2024-11-20 15:35:31.437807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.566 [2024-11-20 15:35:31.437827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.566 [2024-11-20 15:35:31.437836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.566 [2024-11-20 15:35:31.450316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.566 [2024-11-20 15:35:31.450338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.566 [2024-11-20 15:35:31.450346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.566 [2024-11-20 15:35:31.461781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.566 [2024-11-20 15:35:31.461801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.566 [2024-11-20 15:35:31.461809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 24355.00 IOPS, 95.14 MiB/s [2024-11-20T14:35:31.733Z] [2024-11-20 15:35:31.471552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.471574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.471583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.484023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.484046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.484054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.495615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.495636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.495644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.505048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.505077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.505085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.516060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.516082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.516090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.525772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.525792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.525800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.537865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.537885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.537893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.549379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.549399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.549407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.558813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.558833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.558841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.567722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.567743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.567752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.577674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.577694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.577702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.587746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.587766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.587773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.596042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.596062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.596069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.606514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.606535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.606542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.617331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.617351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.617359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.627397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.627417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.627425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.637463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.637483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.637491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.646116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.646136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.646144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.655037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.655058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.655066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.664840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.664861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.664869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.677186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.677210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.677218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.688685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.688705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.825 [2024-11-20 15:35:31.688713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.825 [2024-11-20 15:35:31.698126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.825 [2024-11-20 15:35:31.698147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.826 [2024-11-20 15:35:31.698156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.826 [2024-11-20 15:35:31.709487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.826 [2024-11-20 15:35:31.709508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.826 [2024-11-20 15:35:31.709516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.826 [2024-11-20 15:35:31.721845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:27.826 [2024-11-20 15:35:31.721867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.826 [2024-11-20 15:35:31.721875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.085 [2024-11-20 15:35:31.734230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.085 [2024-11-20 15:35:31.734250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.085 [2024-11-20 15:35:31.734259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.085 [2024-11-20 15:35:31.744274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.085 [2024-11-20 15:35:31.744294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.085 [2024-11-20 15:35:31.744303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.085 [2024-11-20 15:35:31.754512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.085 [2024-11-20 15:35:31.754532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.085 [2024-11-20 15:35:31.754540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.085 [2024-11-20 15:35:31.763319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.085 [2024-11-20 15:35:31.763339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.085 [2024-11-20 15:35:31.763347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.085 [2024-11-20 15:35:31.776203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.085 [2024-11-20 15:35:31.776223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.085 [2024-11-20 15:35:31.776232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.086 [2024-11-20 15:35:31.785760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.086 [2024-11-20 15:35:31.785780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.086 [2024-11-20 15:35:31.785789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.086 [2024-11-20 15:35:31.796211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.086 [2024-11-20 15:35:31.796232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.086 [2024-11-20 15:35:31.796240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.086 [2024-11-20 15:35:31.806842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.086 [2024-11-20 15:35:31.806862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.086 [2024-11-20 15:35:31.806870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.086 [2024-11-20 15:35:31.815346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.086 [2024-11-20 15:35:31.815366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.086 [2024-11-20 15:35:31.815373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.086 [2024-11-20 15:35:31.825074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.086 [2024-11-20 15:35:31.825093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.086 [2024-11-20 15:35:31.825101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.086 [2024-11-20 15:35:31.834347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.086 [2024-11-20 15:35:31.834367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.086 [2024-11-20 15:35:31.834375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.086 [2024-11-20 15:35:31.843680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.086 [2024-11-20 15:35:31.843700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.086 [2024-11-20 15:35:31.843708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.086 [2024-11-20 15:35:31.853053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.086 [2024-11-20 15:35:31.853073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.086 [2024-11-20 15:35:31.853085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.086 [2024-11-20 15:35:31.862417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.086 [2024-11-20 15:35:31.862437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.086 [2024-11-20 15:35:31.862446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.086 [2024-11-20 15:35:31.872365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.086 [2024-11-20 15:35:31.872385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.086 [2024-11-20 15:35:31.872393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.086 [2024-11-20 15:35:31.881815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.086 [2024-11-20 15:35:31.881836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.086 [2024-11-20 15:35:31.881845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.086 [2024-11-20 15:35:31.891174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.086 [2024-11-20 15:35:31.891195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.086 [2024-11-20 15:35:31.891202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.086 [2024-11-20 15:35:31.900303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.086 [2024-11-20 15:35:31.900323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.086 [2024-11-20 15:35:31.900331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.086 [2024-11-20 15:35:31.910113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.086 [2024-11-20 15:35:31.910133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.086 [2024-11-20 15:35:31.910142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.086 [2024-11-20 15:35:31.919401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.086 [2024-11-20 15:35:31.919421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.086 [2024-11-20 15:35:31.919429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.086 [2024-11-20 15:35:31.928985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.086 [2024-11-20 15:35:31.929006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.086 [2024-11-20 15:35:31.929014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.086 [2024-11-20 15:35:31.938433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.086 [2024-11-20 15:35:31.938457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.086 [2024-11-20 15:35:31.938465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.086 [2024-11-20 15:35:31.947450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.086 [2024-11-20 15:35:31.947471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.086 [2024-11-20 15:35:31.947478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.086 [2024-11-20 15:35:31.957156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.086 [2024-11-20 15:35:31.957177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.087 [2024-11-20 15:35:31.957185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.087 [2024-11-20 15:35:31.966137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.087 [2024-11-20 15:35:31.966158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.087 [2024-11-20 15:35:31.966166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.087 [2024-11-20 15:35:31.977843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.087 [2024-11-20 15:35:31.977863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.087 [2024-11-20 15:35:31.977871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.087 [2024-11-20 15:35:31.987298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.087 [2024-11-20 15:35:31.987318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.087 [2024-11-20 15:35:31.987327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.347 [2024-11-20 15:35:31.996419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.347 [2024-11-20 15:35:31.996439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.347 [2024-11-20 15:35:31.996447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.347 [2024-11-20 15:35:32.007922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.347 [2024-11-20 15:35:32.007942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.347 [2024-11-20 15:35:32.007954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.347 [2024-11-20 15:35:32.016908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.347 [2024-11-20 15:35:32.016928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.347 [2024-11-20 15:35:32.016937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.347 [2024-11-20 15:35:32.028570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.347 [2024-11-20 15:35:32.028591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.347 [2024-11-20 15:35:32.028599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.347 [2024-11-20 15:35:32.040871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.347 [2024-11-20 15:35:32.040891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.347 [2024-11-20 15:35:32.040899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.347 [2024-11-20 15:35:32.052146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.347 [2024-11-20 15:35:32.052166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.347 [2024-11-20 15:35:32.052174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.347 [2024-11-20 15:35:32.060904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.347 [2024-11-20 15:35:32.060924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.347 [2024-11-20 15:35:32.060932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.347 [2024-11-20 15:35:32.070397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.347 [2024-11-20 15:35:32.070417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.347 [2024-11-20 15:35:32.070425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.347 [2024-11-20 15:35:32.079779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.347 [2024-11-20 15:35:32.079799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.347 [2024-11-20 15:35:32.079807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.347 [2024-11-20 15:35:32.089224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.347 [2024-11-20 15:35:32.089244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.347 [2024-11-20 15:35:32.089252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.347 [2024-11-20 15:35:32.098584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.347 [2024-11-20 15:35:32.098604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.347 [2024-11-20 15:35:32.098612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.347 [2024-11-20 15:35:32.108000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.347 [2024-11-20 15:35:32.108023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.347 [2024-11-20 15:35:32.108031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.347 [2024-11-20 15:35:32.119059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.347 [2024-11-20 15:35:32.119079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.347 [2024-11-20 15:35:32.119087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.347 [2024-11-20 15:35:32.128419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.347 [2024-11-20 15:35:32.128439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.347 [2024-11-20 15:35:32.128447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.347 [2024-11-20 15:35:32.137901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.347 [2024-11-20 15:35:32.137920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.347 [2024-11-20 15:35:32.137928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.347 [2024-11-20 15:35:32.147523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.347 [2024-11-20 15:35:32.147543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.347 [2024-11-20 15:35:32.147551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.347 [2024-11-20 15:35:32.157158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.348 [2024-11-20 15:35:32.157179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.348 [2024-11-20 15:35:32.157188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.348 [2024-11-20 15:35:32.166139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.348 [2024-11-20 15:35:32.166161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.348 [2024-11-20 15:35:32.166169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.348 [2024-11-20 15:35:32.176152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.348 [2024-11-20 15:35:32.176172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.348 [2024-11-20 15:35:32.176180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.348 [2024-11-20 15:35:32.184588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.348 [2024-11-20 15:35:32.184609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.348 [2024-11-20 15:35:32.184617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.348 [2024-11-20 15:35:32.196015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.348 [2024-11-20 15:35:32.196035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.348 [2024-11-20 15:35:32.196043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.348 [2024-11-20 15:35:32.206263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.348 [2024-11-20 15:35:32.206283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.348 [2024-11-20 15:35:32.206291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.348 [2024-11-20 15:35:32.215971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.348 [2024-11-20 15:35:32.215991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.348 [2024-11-20 15:35:32.215999]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.348 [2024-11-20 15:35:32.226036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.348 [2024-11-20 15:35:32.226057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.348 [2024-11-20 15:35:32.226065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.348 [2024-11-20 15:35:32.235428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.348 [2024-11-20 15:35:32.235448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.348 [2024-11-20 15:35:32.235456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.348 [2024-11-20 15:35:32.246082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.348 [2024-11-20 15:35:32.246102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.348 [2024-11-20 15:35:32.246110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.608 [2024-11-20 15:35:32.257840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.608 [2024-11-20 15:35:32.257860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1969 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:28.608 [2024-11-20 15:35:32.257868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.608 [2024-11-20 15:35:32.268291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.608 [2024-11-20 15:35:32.268312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.608 [2024-11-20 15:35:32.268321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.608 [2024-11-20 15:35:32.277415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.608 [2024-11-20 15:35:32.277436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.608 [2024-11-20 15:35:32.277447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.608 [2024-11-20 15:35:32.286765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.608 [2024-11-20 15:35:32.286786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.608 [2024-11-20 15:35:32.286794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.608 [2024-11-20 15:35:32.298237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.608 [2024-11-20 15:35:32.298258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:24 nsid:1 lba:6914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.608 [2024-11-20 15:35:32.298266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.608 [2024-11-20 15:35:32.308841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.608 [2024-11-20 15:35:32.308861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.608 [2024-11-20 15:35:32.308869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.608 [2024-11-20 15:35:32.318696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.608 [2024-11-20 15:35:32.318717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.608 [2024-11-20 15:35:32.318725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.608 [2024-11-20 15:35:32.327476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.608 [2024-11-20 15:35:32.327496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.608 [2024-11-20 15:35:32.327504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.608 [2024-11-20 15:35:32.339120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.608 [2024-11-20 
15:35:32.339140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.608 [2024-11-20 15:35:32.339148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.608 [2024-11-20 15:35:32.347740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.608 [2024-11-20 15:35:32.347759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.608 [2024-11-20 15:35:32.347767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.608 [2024-11-20 15:35:32.359028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.608 [2024-11-20 15:35:32.359049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.608 [2024-11-20 15:35:32.359058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.608 [2024-11-20 15:35:32.370094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.608 [2024-11-20 15:35:32.370119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.608 [2024-11-20 15:35:32.370127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.608 [2024-11-20 15:35:32.379007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1b22370) 00:26:28.608 [2024-11-20 15:35:32.379029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.608 [2024-11-20 15:35:32.379038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.608 [2024-11-20 15:35:32.387757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.608 [2024-11-20 15:35:32.387778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.608 [2024-11-20 15:35:32.387786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.608 [2024-11-20 15:35:32.398021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.608 [2024-11-20 15:35:32.398042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.608 [2024-11-20 15:35:32.398050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.609 [2024-11-20 15:35:32.407572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.609 [2024-11-20 15:35:32.407594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.609 [2024-11-20 15:35:32.407603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.609 [2024-11-20 15:35:32.415678] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.609 [2024-11-20 15:35:32.415701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.609 [2024-11-20 15:35:32.415709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.609 [2024-11-20 15:35:32.427190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.609 [2024-11-20 15:35:32.427213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.609 [2024-11-20 15:35:32.427221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.609 [2024-11-20 15:35:32.438146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.609 [2024-11-20 15:35:32.438170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.609 [2024-11-20 15:35:32.438178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.609 [2024-11-20 15:35:32.451342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370) 00:26:28.609 [2024-11-20 15:35:32.451364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.609 [2024-11-20 15:35:32.451372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0
00:26:28.609 [2024-11-20 15:35:32.459197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.609 [2024-11-20 15:35:32.459219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.609 [2024-11-20 15:35:32.459227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.609 24826.50 IOPS, 96.98 MiB/s [2024-11-20T14:35:32.517Z] [2024-11-20 15:35:32.471655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b22370)
00:26:28.609 [2024-11-20 15:35:32.471676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.609 [2024-11-20 15:35:32.471684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.609
00:26:28.609 Latency(us)
00:26:28.609 [2024-11-20T14:35:32.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:28.609 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:28.609 nvme0n1 : 2.00 24849.99 97.07 0.00 0.00 5145.54 2721.17 18692.01
00:26:28.609 [2024-11-20T14:35:32.517Z] ===================================================================================================================
00:26:28.609 [2024-11-20T14:35:32.517Z] Total : 24849.99 97.07 0.00 0.00 5145.54 2721.17 18692.01
00:26:28.609 {
00:26:28.609 "results": [
00:26:28.609 {
00:26:28.609 "job": "nvme0n1",
00:26:28.609 "core_mask": "0x2",
00:26:28.609 "workload": "randread",
00:26:28.609 "status": "finished",
00:26:28.609 "queue_depth": 128,
00:26:28.609 "io_size": 4096,
00:26:28.609 "runtime": 2.00326,
00:26:28.609 "iops": 24849.99450895041,
00:26:28.609 "mibps": 97.07029105058754,
00:26:28.609 "io_failed": 0,
00:26:28.609 "io_timeout": 0,
00:26:28.609 "avg_latency_us": 5145.543159455808,
00:26:28.609 "min_latency_us": 2721.168695652174,
00:26:28.609 "max_latency_us": 18692.006956521738
00:26:28.609 }
00:26:28.609 ],
00:26:28.609 "core_count": 1
00:26:28.609 }
00:26:28.609 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:28.609 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:28.609 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:28.609 | .driver_specific
00:26:28.609 | .nvme_error
00:26:28.609 | .status_code
00:26:28.609 | .command_transient_transport_error'
00:26:28.609 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:28.868 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 195 > 0 ))
00:26:28.868 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2309793
00:26:28.868 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2309793 ']'
00:26:28.868 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2309793
00:26:28.868 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:28.868 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:28.868 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2309793
00:26:28.868 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:28.868 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:28.868 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2309793'
killing process with pid 2309793
00:26:28.868 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2309793
Received shutdown signal, test time was about 2.000000 seconds
00:26:28.868
00:26:28.868 Latency(us)
00:26:28.868 [2024-11-20T14:35:32.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:28.868 [2024-11-20T14:35:32.776Z] ===================================================================================================================
00:26:28.868 [2024-11-20T14:35:32.776Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:28.868 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2309793
00:26:29.127 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:26:29.127 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:29.127 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:29.127 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:29.127 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:29.127 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2310466
00:26:29.127 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2310466 /var/tmp/bperf.sock
00:26:29.127 15:35:32
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:26:29.127 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2310466 ']'
00:26:29.127 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:29.127 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:29.127 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:29.127 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:29.127 15:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:29.127 [2024-11-20 15:35:32.954417] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization...
00:26:29.127 [2024-11-20 15:35:32.954466] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2310466 ]
00:26:29.127 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:29.127 Zero copy mechanism will not be used.
00:26:29.127 [2024-11-20 15:35:33.030199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:29.386 [2024-11-20 15:35:33.070891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:29.386 15:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:29.386 15:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:29.386 15:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:29.386 15:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:29.645 15:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:29.645 15:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:29.645 15:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:29.645 15:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:29.645 15:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:29.645 15:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:29.904 nvme0n1
00:26:29.904 15:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:29.904 15:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.904 15:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.904 15:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.904 15:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:29.904 15:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:29.904 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:29.904 Zero copy mechanism will not be used. 00:26:29.904 Running I/O for 2 seconds... 00:26:29.904 [2024-11-20 15:35:33.799592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:29.904 [2024-11-20 15:35:33.799627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.904 [2024-11-20 15:35:33.799638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.904 [2024-11-20 15:35:33.805347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:29.904 [2024-11-20 15:35:33.805376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.904 [2024-11-20 15:35:33.805386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.163 
[2024-11-20 15:35:33.811256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.163 [2024-11-20 15:35:33.811283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.811292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.816635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.816659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.816667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.821821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.821844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.821852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.826873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.826900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.826909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.832035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.832059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.832067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.837279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.837302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.837310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.842472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.842495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.842503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.847766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.847787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.847796] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.853002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.853025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.853034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.858252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.858275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.858283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.863520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.863544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.863552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.868913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.868936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 
15:35:33.868945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.874170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.874194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.874202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.879586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.879609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.879618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.884799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.884821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.884830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.889991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.890013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10848 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.890022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.895220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.895242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.895250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.900403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.900431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.900440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.905618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.905640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.905648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.910808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.910831] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.910839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.916101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.916121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.916133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.921282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.921304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.921312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.926477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 15:35:33.926498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.926506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.164 [2024-11-20 15:35:33.931632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.164 [2024-11-20 
15:35:33.931654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.164 [2024-11-20 15:35:33.931662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:33.936826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:33.936850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:33.936859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:33.942085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:33.942107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:33.942115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:33.947283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:33.947306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:33.947314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:33.952459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:33.952482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:33.952491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:33.958107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:33.958130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:33.958139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:33.963935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:33.963969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:33.963977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:33.969155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:33.969177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:33.969185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:33.974391] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:33.974413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:33.974421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:33.979682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:33.979703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:33.979711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:33.984979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:33.985001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:33.985009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:33.990212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:33.990234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:33.990243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:33.995451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:33.995473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:33.995481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:34.000693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:34.000715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:34.000724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:34.005911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:34.005933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:34.005941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:34.011197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:34.011219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:34.011227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:34.016419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:34.016441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:34.016449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:34.021720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:34.021742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:34.021750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:34.026908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:34.026930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:34.026939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:34.032204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:34.032226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:34.032234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:34.037428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:34.037450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:34.037458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:34.042699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:34.042722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:34.042730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:34.047923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:34.047953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.165 [2024-11-20 15:35:34.047962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:34.053196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:34.053219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:30.165 [2024-11-20 15:35:34.053230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.165 [2024-11-20 15:35:34.058505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.165 [2024-11-20 15:35:34.058527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.166 [2024-11-20 15:35:34.058536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.166 [2024-11-20 15:35:34.063893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.166 [2024-11-20 15:35:34.063916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.166 [2024-11-20 15:35:34.063925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.425 [2024-11-20 15:35:34.069145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.425 [2024-11-20 15:35:34.069168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.425 [2024-11-20 15:35:34.069176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.074470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.074492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.074501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.079761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.079783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.079792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.085011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.085031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.085039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.090161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.090182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.090190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.095366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.095388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.095395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.100571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.100597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.100606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.105853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.105875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.105883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.111085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.111106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.111114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.116289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 
00:26:30.426 [2024-11-20 15:35:34.116311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.116320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.121486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.121508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.121516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.126726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.126748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.126757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.131954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.131977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.131985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.137134] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.137156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.137164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.142429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.142450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.142459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.147676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.147698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.147706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.152883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.152905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.152913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.158170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.158192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.158200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.163378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.163400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.163408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.168602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.168625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.168633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.172074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.172096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.172104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.176320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.176342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.176350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.181491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.181513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.181522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.186746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.186768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.186780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.426 [2024-11-20 15:35:34.192711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.426 [2024-11-20 15:35:34.192733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.426 [2024-11-20 15:35:34.192742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.426 [2024-11-20 15:35:34.198205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.426 [2024-11-20 15:35:34.198228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.426 [2024-11-20 15:35:34.198236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.426 [2024-11-20 15:35:34.203432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.426 [2024-11-20 15:35:34.203455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.426 [2024-11-20 15:35:34.203464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.426 [2024-11-20 15:35:34.208637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.208659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.208667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.213902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.213924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.213932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.219179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.219201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.219210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.224408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.224430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.224438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.229642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.229664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.229672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.234874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.234897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.234905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.240128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.240149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.240157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.245351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.245372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.245380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.250553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.250574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.250582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.255730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.255751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.255759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.260997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.261018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.261027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.266199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.266221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.266229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.271442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.271464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.271472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.276660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.276681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.276693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.281813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.281834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.281842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.287113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.287133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.287141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.292360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.292381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.292389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.297591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.297612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.297620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.302851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.302872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.302880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.308098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.308119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.308127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.313419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.313442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.313450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.318794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.318816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.318825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.324034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.324059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.324067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.427 [2024-11-20 15:35:34.329299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.427 [2024-11-20 15:35:34.329321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.427 [2024-11-20 15:35:34.329329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.688 [2024-11-20 15:35:34.334553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.688 [2024-11-20 15:35:34.334574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.688 [2024-11-20 15:35:34.334582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.688 [2024-11-20 15:35:34.339826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.688 [2024-11-20 15:35:34.339847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.688 [2024-11-20 15:35:34.339855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.688 [2024-11-20 15:35:34.345090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.688 [2024-11-20 15:35:34.345111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.688 [2024-11-20 15:35:34.345119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.688 [2024-11-20 15:35:34.350297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.688 [2024-11-20 15:35:34.350318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.688 [2024-11-20 15:35:34.350326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.688 [2024-11-20 15:35:34.355534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.688 [2024-11-20 15:35:34.355555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.688 [2024-11-20 15:35:34.355563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.688 [2024-11-20 15:35:34.360780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.688 [2024-11-20 15:35:34.360802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.688 [2024-11-20 15:35:34.360810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.688 [2024-11-20 15:35:34.366012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.688 [2024-11-20 15:35:34.366034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.688 [2024-11-20 15:35:34.366042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.688 [2024-11-20 15:35:34.371173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.688 [2024-11-20 15:35:34.371194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.688 [2024-11-20 15:35:34.371202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.688 [2024-11-20 15:35:34.376369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.688 [2024-11-20 15:35:34.376391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.688 [2024-11-20 15:35:34.376399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.688 [2024-11-20 15:35:34.381643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.688 [2024-11-20 15:35:34.381664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.688 [2024-11-20 15:35:34.381671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.688 [2024-11-20 15:35:34.386858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.688 [2024-11-20 15:35:34.386880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.688 [2024-11-20 15:35:34.386888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.688 [2024-11-20 15:35:34.392122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.688 [2024-11-20 15:35:34.392143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.688 [2024-11-20 15:35:34.392151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.688 [2024-11-20 15:35:34.397409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.688 [2024-11-20 15:35:34.397430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.688 [2024-11-20 15:35:34.397437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.688 [2024-11-20 15:35:34.402633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.688 [2024-11-20 15:35:34.402655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.688 [2024-11-20 15:35:34.402663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.688 [2024-11-20 15:35:34.407901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.688 [2024-11-20 15:35:34.407922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.407930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.413159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.413180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.413191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.418802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.418824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.418832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.424820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.424842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.424851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.430115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.430136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.430145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.435359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.435380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.435388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.440582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.440603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.440611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.445829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.445850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.445859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.450311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.450332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.450340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.455486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.455508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.455516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.460612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.460637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.460646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.465659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.465681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.465689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.471189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.471212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.471220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.476727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.476749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.476757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.481946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.481972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.481980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.487202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.487224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.487232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.492203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.492225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.492233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.497461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.497483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.497491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.502660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.502681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.502689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.507654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.507675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.507683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.512148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.512168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.512176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.515318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.515338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.515346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.520487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.520507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.520516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.525440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.525461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.525469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.530602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.530623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.530632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.535648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.535669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.535677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.540737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.540759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.689 [2024-11-20 15:35:34.540767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.689 [2024-11-20 15:35:34.545946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.689 [2024-11-20 15:35:34.545975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.690 [2024-11-20 15:35:34.545986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.690 [2024-11-20 15:35:34.551186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.690 [2024-11-20 15:35:34.551207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.690 [2024-11-20 15:35:34.551215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.690 [2024-11-20 15:35:34.556444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.690 [2024-11-20 15:35:34.556466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.690 [2024-11-20 15:35:34.556475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.690 [2024-11-20 15:35:34.561739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.690 [2024-11-20 15:35:34.561760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.690 [2024-11-20 15:35:34.561768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.690 [2024-11-20 15:35:34.567051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.690 [2024-11-20 15:35:34.567074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.690 [2024-11-20 15:35:34.567083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.690 [2024-11-20 15:35:34.572395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.690 [2024-11-20 15:35:34.572418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.690 [2024-11-20 15:35:34.572427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.690 [2024-11-20 15:35:34.577754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.690 [2024-11-20 15:35:34.577775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.690 [2024-11-20 15:35:34.577784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.690 [2024-11-20 15:35:34.582914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.690 [2024-11-20 15:35:34.582935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.690 [2024-11-20 15:35:34.582943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.690 [2024-11-20 15:35:34.588239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.690 [2024-11-20 15:35:34.588259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.690 [2024-11-20 15:35:34.588267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.949 [2024-11-20 15:35:34.593480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.949 [2024-11-20 15:35:34.593501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.949 [2024-11-20 15:35:34.593509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.949 [2024-11-20 15:35:34.598787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580)
00:26:30.949 [2024-11-20 15:35:34.598808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.949 [2024-11-20 15:35:34.598816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.949 [2024-11-20 15:35:34.604089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.949 [2024-11-20 15:35:34.604109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.949 [2024-11-20 15:35:34.604117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.949 [2024-11-20 15:35:34.609250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.949 [2024-11-20 15:35:34.609271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.949 [2024-11-20 15:35:34.609279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.614559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.614580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.614588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.619865] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.619886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.619894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.625078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.625099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.625107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.630309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.630331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.630339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.635565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.635587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.635599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.640847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.640868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.640876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.646119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.646140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.646148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.651357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.651377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.651385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.656638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.656660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.656668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.661941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.661967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.661975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.667613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.667634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.667643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.674275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.674296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.674304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.681556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.681577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.681585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.688490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.688518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.688527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.696165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.696189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.696197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.703716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.703739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.703747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.710692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.710715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:30.950 [2024-11-20 15:35:34.710724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.716301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.716324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.716332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.721706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.721728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.721737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.727024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.727046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.727054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.732181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.732204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.732212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.738121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.738142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.738151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.743546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.743568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.743577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.748856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.748878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.748886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.754206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.754227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.754235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.759464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.950 [2024-11-20 15:35:34.759485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.950 [2024-11-20 15:35:34.759493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.950 [2024-11-20 15:35:34.764790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.951 [2024-11-20 15:35:34.764811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.951 [2024-11-20 15:35:34.764819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.951 [2024-11-20 15:35:34.770139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.951 [2024-11-20 15:35:34.770161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.951 [2024-11-20 15:35:34.770170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.951 [2024-11-20 15:35:34.775616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 
00:26:30.951 [2024-11-20 15:35:34.775639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.951 [2024-11-20 15:35:34.775647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.951 [2024-11-20 15:35:34.781052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.951 [2024-11-20 15:35:34.781073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.951 [2024-11-20 15:35:34.781082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.951 [2024-11-20 15:35:34.786495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.951 [2024-11-20 15:35:34.786516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.951 [2024-11-20 15:35:34.786528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.951 [2024-11-20 15:35:34.791995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.951 [2024-11-20 15:35:34.792016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.951 [2024-11-20 15:35:34.792023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.951 5802.00 IOPS, 725.25 MiB/s [2024-11-20T14:35:34.859Z] [2024-11-20 15:35:34.799232] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.951 [2024-11-20 15:35:34.799255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.951 [2024-11-20 15:35:34.799263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.951 [2024-11-20 15:35:34.804779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.951 [2024-11-20 15:35:34.804800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.951 [2024-11-20 15:35:34.804809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.951 [2024-11-20 15:35:34.810674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.951 [2024-11-20 15:35:34.810697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.951 [2024-11-20 15:35:34.810705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.951 [2024-11-20 15:35:34.816252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.951 [2024-11-20 15:35:34.816273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.951 [2024-11-20 15:35:34.816281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:26:30.951 [2024-11-20 15:35:34.821734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.951 [2024-11-20 15:35:34.821755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.951 [2024-11-20 15:35:34.821764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.951 [2024-11-20 15:35:34.827317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.951 [2024-11-20 15:35:34.827340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.951 [2024-11-20 15:35:34.827349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.951 [2024-11-20 15:35:34.832782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.951 [2024-11-20 15:35:34.832804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.951 [2024-11-20 15:35:34.832812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.951 [2024-11-20 15:35:34.838294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.951 [2024-11-20 15:35:34.838320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.951 [2024-11-20 15:35:34.838328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.951 [2024-11-20 15:35:34.843692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.951 [2024-11-20 15:35:34.843713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.951 [2024-11-20 15:35:34.843722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.951 [2024-11-20 15:35:34.849126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:30.951 [2024-11-20 15:35:34.849147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.951 [2024-11-20 15:35:34.849155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.211 [2024-11-20 15:35:34.854690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.211 [2024-11-20 15:35:34.854712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.211 [2024-11-20 15:35:34.854721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.211 [2024-11-20 15:35:34.859860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.211 [2024-11-20 15:35:34.859883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.211 [2024-11-20 15:35:34.859892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.211 [2024-11-20 15:35:34.865375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.211 [2024-11-20 15:35:34.865397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.211 [2024-11-20 15:35:34.865405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.211 [2024-11-20 15:35:34.870340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.211 [2024-11-20 15:35:34.870361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.211 [2024-11-20 15:35:34.870370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.211 [2024-11-20 15:35:34.875536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.211 [2024-11-20 15:35:34.875557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.211 [2024-11-20 15:35:34.875565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.211 [2024-11-20 15:35:34.880877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.211 [2024-11-20 15:35:34.880899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:31.211 [2024-11-20 15:35:34.880907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.211 [2024-11-20 15:35:34.886273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.211 [2024-11-20 15:35:34.886295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.211 [2024-11-20 15:35:34.886303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.211 [2024-11-20 15:35:34.891735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.211 [2024-11-20 15:35:34.891758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.211 [2024-11-20 15:35:34.891766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.211 [2024-11-20 15:35:34.897117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.211 [2024-11-20 15:35:34.897139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.211 [2024-11-20 15:35:34.897147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.211 [2024-11-20 15:35:34.902541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.211 [2024-11-20 15:35:34.902562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.211 [2024-11-20 15:35:34.902570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.211 [2024-11-20 15:35:34.908007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.211 [2024-11-20 15:35:34.908028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.211 [2024-11-20 15:35:34.908038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.211 [2024-11-20 15:35:34.913541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.211 [2024-11-20 15:35:34.913563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.211 [2024-11-20 15:35:34.913571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.211 [2024-11-20 15:35:34.918988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.211 [2024-11-20 15:35:34.919010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.211 [2024-11-20 15:35:34.919018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:34.924712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:34.924734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:34.924743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:34.930107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:34.930129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:34.930140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:34.935573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:34.935595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:34.935603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:34.941063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:34.941085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:34.941093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:34.946673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 
00:26:31.212 [2024-11-20 15:35:34.946695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:34.946703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:34.952062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:34.952084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:34.952092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:34.957390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:34.957411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:34.957419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:34.962835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:34.962856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:34.962863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:34.968348] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:34.968370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:34.968378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:34.973881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:34.973903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:34.973911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:34.979424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:34.979450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:34.979458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:34.984757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:34.984778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:34.984786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:34.990270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:34.990291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:34.990300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:34.993558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:34.993579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:34.993587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:34.999111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:34.999132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:34.999140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:35.003856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:35.003878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:35.003886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:35.009005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:35.009027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:35.009035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:35.014294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:35.014315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:35.014323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:35.019797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:35.019819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:35.019827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:35.025245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:35.025266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:35.025274] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:35.030626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:35.030648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:35.030656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.212 [2024-11-20 15:35:35.036267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.212 [2024-11-20 15:35:35.036289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.212 [2024-11-20 15:35:35.036297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.213 [2024-11-20 15:35:35.041906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.213 [2024-11-20 15:35:35.041928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.213 [2024-11-20 15:35:35.041936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.213 [2024-11-20 15:35:35.048157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.213 [2024-11-20 15:35:35.048179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:31.213 [2024-11-20 15:35:35.048187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.213 [2024-11-20 15:35:35.053782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.213 [2024-11-20 15:35:35.053804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.213 [2024-11-20 15:35:35.053812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.213 [2024-11-20 15:35:35.059525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.213 [2024-11-20 15:35:35.059546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.213 [2024-11-20 15:35:35.059554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.213 [2024-11-20 15:35:35.064900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.213 [2024-11-20 15:35:35.064921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.213 [2024-11-20 15:35:35.064929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.213 [2024-11-20 15:35:35.070315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.213 [2024-11-20 15:35:35.070337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.213 [2024-11-20 15:35:35.070348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.213 [2024-11-20 15:35:35.075833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.213 [2024-11-20 15:35:35.075855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.213 [2024-11-20 15:35:35.075863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.213 [2024-11-20 15:35:35.081543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.213 [2024-11-20 15:35:35.081565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.213 [2024-11-20 15:35:35.081575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.213 [2024-11-20 15:35:35.086898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.213 [2024-11-20 15:35:35.086919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.213 [2024-11-20 15:35:35.086927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.213 [2024-11-20 15:35:35.092446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.213 [2024-11-20 15:35:35.092468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.213 [2024-11-20 15:35:35.092477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.213 [2024-11-20 15:35:35.097960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.213 [2024-11-20 15:35:35.097982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.213 [2024-11-20 15:35:35.097990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.213 [2024-11-20 15:35:35.103629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.213 [2024-11-20 15:35:35.103651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.213 [2024-11-20 15:35:35.103659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.213 [2024-11-20 15:35:35.109154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.213 [2024-11-20 15:35:35.109176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.213 [2024-11-20 15:35:35.109184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.213 [2024-11-20 15:35:35.114809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 
00:26:31.213 [2024-11-20 15:35:35.114831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.213 [2024-11-20 15:35:35.114839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.474 [2024-11-20 15:35:35.120162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.474 [2024-11-20 15:35:35.120183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.474 [2024-11-20 15:35:35.120191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.474 [2024-11-20 15:35:35.125896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.474 [2024-11-20 15:35:35.125917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.474 [2024-11-20 15:35:35.125926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.474 [2024-11-20 15:35:35.132168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.474 [2024-11-20 15:35:35.132190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.474 [2024-11-20 15:35:35.132199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.474 [2024-11-20 15:35:35.138819] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.474 [2024-11-20 15:35:35.138842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.474 [2024-11-20 15:35:35.138850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.474 [2024-11-20 15:35:35.146498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.474 [2024-11-20 15:35:35.146521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.474 [2024-11-20 15:35:35.146529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.474 [2024-11-20 15:35:35.153062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.474 [2024-11-20 15:35:35.153085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.474 [2024-11-20 15:35:35.153093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.474 [2024-11-20 15:35:35.159639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.474 [2024-11-20 15:35:35.159661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.474 [2024-11-20 15:35:35.159669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:26:31.474 [2024-11-20 15:35:35.165630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.474 [2024-11-20 15:35:35.165651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.474 [2024-11-20 15:35:35.165659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.474 [2024-11-20 15:35:35.172972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.474 [2024-11-20 15:35:35.172995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.474 [2024-11-20 15:35:35.173008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.474 [2024-11-20 15:35:35.180326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.474 [2024-11-20 15:35:35.180348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.474 [2024-11-20 15:35:35.180357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.474 [2024-11-20 15:35:35.187226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.474 [2024-11-20 15:35:35.187248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.474 [2024-11-20 15:35:35.187257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.474 [2024-11-20 15:35:35.193759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.474 [2024-11-20 15:35:35.193780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.193788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.200561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.200583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.200593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.208249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.208272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.208281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.215820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.215844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.215854] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.223775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.223798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.223807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.231130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.231165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.231174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.237743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.237771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.237780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.243622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.243646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:31.475 [2024-11-20 15:35:35.243654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.250041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.250064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.250073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.255520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.255542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.255550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.261032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.261054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.261062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.266566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.266588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.266596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.271377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.271400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.271408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.274528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.274550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.274559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.279417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.279439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.279448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.285175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.285199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.285207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.290799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.290822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.290830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.296470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.296492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.296502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.301935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.301963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.301974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.307365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 
00:26:31.475 [2024-11-20 15:35:35.307385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.307393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.312752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.312773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.312781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.318227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.318249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.318257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.323468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.323490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.323498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.328775] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.328798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.328812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.334091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.334112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.334120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.339386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.339407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.339415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.344786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.344808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.475 [2024-11-20 15:35:35.344816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:26:31.475 [2024-11-20 15:35:35.350157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.475 [2024-11-20 15:35:35.350179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.476 [2024-11-20 15:35:35.350188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.476 [2024-11-20 15:35:35.355592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.476 [2024-11-20 15:35:35.355614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.476 [2024-11-20 15:35:35.355623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.476 [2024-11-20 15:35:35.361091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.476 [2024-11-20 15:35:35.361112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.476 [2024-11-20 15:35:35.361121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.476 [2024-11-20 15:35:35.366692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.476 [2024-11-20 15:35:35.366714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.476 [2024-11-20 15:35:35.366722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.476 [2024-11-20 15:35:35.372209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.476 [2024-11-20 15:35:35.372230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.476 [2024-11-20 15:35:35.372238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.476 [2024-11-20 15:35:35.377611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.476 [2024-11-20 15:35:35.377636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.476 [2024-11-20 15:35:35.377644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.736 [2024-11-20 15:35:35.383180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.736 [2024-11-20 15:35:35.383202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.736 [2024-11-20 15:35:35.383211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.736 [2024-11-20 15:35:35.388754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.736 [2024-11-20 15:35:35.388776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.736 [2024-11-20 15:35:35.388785] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.736 [2024-11-20 15:35:35.394319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.736 [2024-11-20 15:35:35.394342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.736 [2024-11-20 15:35:35.394350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.736 [2024-11-20 15:35:35.399854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.736 [2024-11-20 15:35:35.399877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.736 [2024-11-20 15:35:35.399885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.736 [2024-11-20 15:35:35.405471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.736 [2024-11-20 15:35:35.405492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.736 [2024-11-20 15:35:35.405500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.736 [2024-11-20 15:35:35.410903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.736 [2024-11-20 15:35:35.410925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:31.736 [2024-11-20 15:35:35.410935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.736 [2024-11-20 15:35:35.416368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.736 [2024-11-20 15:35:35.416390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.416398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.421677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.737 [2024-11-20 15:35:35.421699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.421707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.427068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.737 [2024-11-20 15:35:35.427089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.427097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.432426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.737 [2024-11-20 15:35:35.432448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.432456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.437624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.737 [2024-11-20 15:35:35.437645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.437654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.443254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.737 [2024-11-20 15:35:35.443276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.443284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.448624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.737 [2024-11-20 15:35:35.448646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.448654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.454125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.737 [2024-11-20 15:35:35.454146] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.454154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.459685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.737 [2024-11-20 15:35:35.459708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.459716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.465263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.737 [2024-11-20 15:35:35.465286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.465294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.470896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.737 [2024-11-20 15:35:35.470918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.470930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.476908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x937580) 00:26:31.737 [2024-11-20 15:35:35.476930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.476938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.482407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.737 [2024-11-20 15:35:35.482428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.482439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.487874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.737 [2024-11-20 15:35:35.487896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.487904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.493247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.737 [2024-11-20 15:35:35.493268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.493276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.498605] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.737 [2024-11-20 15:35:35.498627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.498635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.504069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.737 [2024-11-20 15:35:35.504090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.504099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.509524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.737 [2024-11-20 15:35:35.509545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.509553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.514989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.737 [2024-11-20 15:35:35.515010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.515018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.520424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.737 [2024-11-20 15:35:35.520450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.520458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.525977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.737 [2024-11-20 15:35:35.525999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.737 [2024-11-20 15:35:35.526007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.737 [2024-11-20 15:35:35.531509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.531530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.738 [2024-11-20 15:35:35.531538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.738 [2024-11-20 15:35:35.536900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.536921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.738 [2024-11-20 15:35:35.536929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.738 [2024-11-20 15:35:35.542358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.542379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.738 [2024-11-20 15:35:35.542387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.738 [2024-11-20 15:35:35.547819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.547840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.738 [2024-11-20 15:35:35.547849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.738 [2024-11-20 15:35:35.553205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.553227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.738 [2024-11-20 15:35:35.553235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.738 [2024-11-20 15:35:35.558585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.558607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.738 [2024-11-20 15:35:35.558615] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.738 [2024-11-20 15:35:35.563924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.563953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.738 [2024-11-20 15:35:35.563965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.738 [2024-11-20 15:35:35.569400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.569421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.738 [2024-11-20 15:35:35.569429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.738 [2024-11-20 15:35:35.574754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.574775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.738 [2024-11-20 15:35:35.574783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.738 [2024-11-20 15:35:35.580123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.580144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:31.738 [2024-11-20 15:35:35.580154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.738 [2024-11-20 15:35:35.585441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.585463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.738 [2024-11-20 15:35:35.585471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.738 [2024-11-20 15:35:35.590769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.590791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.738 [2024-11-20 15:35:35.590799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.738 [2024-11-20 15:35:35.596152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.596173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.738 [2024-11-20 15:35:35.596181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.738 [2024-11-20 15:35:35.601512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.601533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.738 [2024-11-20 15:35:35.601541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.738 [2024-11-20 15:35:35.606980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.607001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.738 [2024-11-20 15:35:35.607010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.738 [2024-11-20 15:35:35.612877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.612903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.738 [2024-11-20 15:35:35.612912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.738 [2024-11-20 15:35:35.618462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.618483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.738 [2024-11-20 15:35:35.618492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.738 [2024-11-20 15:35:35.623866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.623887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.738 [2024-11-20 15:35:35.623896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.738 [2024-11-20 15:35:35.629398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.629421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.738 [2024-11-20 15:35:35.629430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.738 [2024-11-20 15:35:35.634966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.634989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.738 [2024-11-20 15:35:35.634997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.738 [2024-11-20 15:35:35.640434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.738 [2024-11-20 15:35:35.640456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.738 [2024-11-20 15:35:35.640464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.998 [2024-11-20 15:35:35.645983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 
00:26:31.998 [2024-11-20 15:35:35.646006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.998 [2024-11-20 15:35:35.646014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.998 [2024-11-20 15:35:35.651634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.998 [2024-11-20 15:35:35.651654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.998 [2024-11-20 15:35:35.651663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.998 [2024-11-20 15:35:35.657127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.998 [2024-11-20 15:35:35.657149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.998 [2024-11-20 15:35:35.657157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.998 [2024-11-20 15:35:35.662463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.998 [2024-11-20 15:35:35.662485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.998 [2024-11-20 15:35:35.662494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.998 [2024-11-20 15:35:35.667717] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.998 [2024-11-20 15:35:35.667739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.998 [2024-11-20 15:35:35.667747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.998 [2024-11-20 15:35:35.673071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.998 [2024-11-20 15:35:35.673092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.998 [2024-11-20 15:35:35.673100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.998 [2024-11-20 15:35:35.678286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.998 [2024-11-20 15:35:35.678309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.998 [2024-11-20 15:35:35.678317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.998 [2024-11-20 15:35:35.683603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.998 [2024-11-20 15:35:35.683625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.998 [2024-11-20 15:35:35.683633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:31.998 [2024-11-20 15:35:35.688973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.998 [2024-11-20 15:35:35.688994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.998 [2024-11-20 15:35:35.689003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.998 [2024-11-20 15:35:35.694439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.998 [2024-11-20 15:35:35.694460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.998 [2024-11-20 15:35:35.694468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.998 [2024-11-20 15:35:35.700060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.999 [2024-11-20 15:35:35.700082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.999 [2024-11-20 15:35:35.700090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.999 [2024-11-20 15:35:35.705654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.999 [2024-11-20 15:35:35.705676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.999 [2024-11-20 15:35:35.705688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.999 [2024-11-20 15:35:35.711251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.999 [2024-11-20 15:35:35.711273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.999 [2024-11-20 15:35:35.711282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.999 [2024-11-20 15:35:35.716743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.999 [2024-11-20 15:35:35.716765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.999 [2024-11-20 15:35:35.716773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.999 [2024-11-20 15:35:35.722133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.999 [2024-11-20 15:35:35.722155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.999 [2024-11-20 15:35:35.722163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.999 [2024-11-20 15:35:35.727567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.999 [2024-11-20 15:35:35.727589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.999 [2024-11-20 15:35:35.727597] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.999 [2024-11-20 15:35:35.732969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.999 [2024-11-20 15:35:35.732990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.999 [2024-11-20 15:35:35.732998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.999 [2024-11-20 15:35:35.738429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.999 [2024-11-20 15:35:35.738450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.999 [2024-11-20 15:35:35.738458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.999 [2024-11-20 15:35:35.743748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.999 [2024-11-20 15:35:35.743770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.999 [2024-11-20 15:35:35.743777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.999 [2024-11-20 15:35:35.749933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.999 [2024-11-20 15:35:35.749961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:31.999 [2024-11-20 15:35:35.749970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.999 [2024-11-20 15:35:35.755629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.999 [2024-11-20 15:35:35.755654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.999 [2024-11-20 15:35:35.755663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.999 [2024-11-20 15:35:35.761250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.999 [2024-11-20 15:35:35.761273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.999 [2024-11-20 15:35:35.761281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.999 [2024-11-20 15:35:35.766832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.999 [2024-11-20 15:35:35.766854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.999 [2024-11-20 15:35:35.766862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.999 [2024-11-20 15:35:35.772415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.999 [2024-11-20 15:35:35.772437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.999 [2024-11-20 15:35:35.772445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.999 [2024-11-20 15:35:35.778050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.999 [2024-11-20 15:35:35.778071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.999 [2024-11-20 15:35:35.778079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.999 [2024-11-20 15:35:35.783745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.999 [2024-11-20 15:35:35.783767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.999 [2024-11-20 15:35:35.783775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.999 [2024-11-20 15:35:35.789326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.999 [2024-11-20 15:35:35.789347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.999 [2024-11-20 15:35:35.789356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.999 [2024-11-20 15:35:35.794811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x937580) 00:26:31.999 [2024-11-20 15:35:35.794834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.999 [2024-11-20 15:35:35.794842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.999 5674.50 IOPS, 709.31 MiB/s 00:26:31.999 Latency(us) 00:26:31.999 [2024-11-20T14:35:35.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.999 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:31.999 nvme0n1 : 2.00 5673.07 709.13 0.00 0.00 2817.66 637.55 11910.46 00:26:31.999 [2024-11-20T14:35:35.907Z] =================================================================================================================== 00:26:31.999 [2024-11-20T14:35:35.907Z] Total : 5673.07 709.13 0.00 0.00 2817.66 637.55 11910.46 00:26:31.999 { 00:26:31.999 "results": [ 00:26:31.999 { 00:26:31.999 "job": "nvme0n1", 00:26:31.999 "core_mask": "0x2", 00:26:31.999 "workload": "randread", 00:26:31.999 "status": "finished", 00:26:31.999 "queue_depth": 16, 00:26:31.999 "io_size": 131072, 00:26:31.999 "runtime": 2.003323, 00:26:31.999 "iops": 5673.074187237904, 00:26:31.999 "mibps": 709.134273404738, 00:26:31.999 "io_failed": 0, 00:26:31.999 "io_timeout": 0, 00:26:31.999 "avg_latency_us": 2817.6593896593276, 00:26:31.999 "min_latency_us": 637.5513043478261, 00:26:31.999 "max_latency_us": 11910.455652173912 00:26:31.999 } 00:26:31.999 ], 00:26:31.999 "core_count": 1 00:26:31.999 } 00:26:31.999 15:35:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:31.999 15:35:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:31.999 15:35:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:31.999 15:35:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:31.999 | .driver_specific 00:26:31.999 | .nvme_error 00:26:31.999 | .status_code 00:26:31.999 | .command_transient_transport_error' 00:26:32.258 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 367 > 0 )) 00:26:32.258 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2310466 00:26:32.258 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2310466 ']' 00:26:32.258 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2310466 00:26:32.258 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:32.258 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:32.258 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2310466 00:26:32.258 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:32.258 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:32.258 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2310466' 00:26:32.258 killing process with pid 2310466 00:26:32.258 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2310466 00:26:32.258 Received shutdown signal, test time was about 2.000000 seconds 00:26:32.258 00:26:32.258 Latency(us) 00:26:32.258 [2024-11-20T14:35:36.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.258 
[2024-11-20T14:35:36.166Z] =================================================================================================================== 00:26:32.258 [2024-11-20T14:35:36.166Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:32.258 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2310466 00:26:32.517 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:32.517 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:32.517 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:32.517 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:32.517 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:32.517 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2310940 00:26:32.517 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2310940 /var/tmp/bperf.sock 00:26:32.517 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:32.517 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2310940 ']' 00:26:32.517 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:32.517 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:32.517 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:26:32.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:32.517 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:32.517 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:32.517 [2024-11-20 15:35:36.275941] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:26:32.517 [2024-11-20 15:35:36.275998] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2310940 ] 00:26:32.517 [2024-11-20 15:35:36.353397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.517 [2024-11-20 15:35:36.395878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.775 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.775 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:32.776 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:32.776 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:32.776 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:32.776 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.776 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:26:33.034 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.034 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:33.034 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:33.034 nvme0n1 00:26:33.034 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:33.034 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.034 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:33.034 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.034 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:33.034 15:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:33.294 Running I/O for 2 seconds... 
00:26:33.294 [2024-11-20 15:35:37.043490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e1710 00:26:33.294 [2024-11-20 15:35:37.044424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.294 [2024-11-20 15:35:37.044453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.294 [2024-11-20 15:35:37.054220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ea680 00:26:33.294 [2024-11-20 15:35:37.055555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.294 [2024-11-20 15:35:37.055579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:33.294 [2024-11-20 15:35:37.063717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f2510 00:26:33.294 [2024-11-20 15:35:37.065124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.294 [2024-11-20 15:35:37.065144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.294 [2024-11-20 15:35:37.070544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f0350 00:26:33.294 [2024-11-20 15:35:37.071271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.294 [2024-11-20 15:35:37.071291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:66 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:33.294 [2024-11-20 15:35:37.080325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f0350 00:26:33.294 [2024-11-20 15:35:37.081097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.294 [2024-11-20 15:35:37.081116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.294 [2024-11-20 15:35:37.090696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f0350 00:26:33.294 [2024-11-20 15:35:37.091899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.294 [2024-11-20 15:35:37.091918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.294 [2024-11-20 15:35:37.100447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f8e88 00:26:33.294 [2024-11-20 15:35:37.101855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.294 [2024-11-20 15:35:37.101875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:33.294 [2024-11-20 15:35:37.109089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ee190 00:26:33.294 [2024-11-20 15:35:37.110129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.294 [2024-11-20 15:35:37.110149] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.294 [2024-11-20 15:35:37.119410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e9e10 00:26:33.294 [2024-11-20 15:35:37.120882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.294 [2024-11-20 15:35:37.120901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.294 [2024-11-20 15:35:37.125902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fd208 00:26:33.294 [2024-11-20 15:35:37.126558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.294 [2024-11-20 15:35:37.126577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.294 [2024-11-20 15:35:37.137723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166df988 00:26:33.294 [2024-11-20 15:35:37.139187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.294 [2024-11-20 15:35:37.139206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.294 [2024-11-20 15:35:37.144178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e0630 00:26:33.294 [2024-11-20 15:35:37.144801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.294 [2024-11-20 15:35:37.144820] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:33.294 [2024-11-20 15:35:37.152859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f6020 00:26:33.294 [2024-11-20 15:35:37.153488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.294 [2024-11-20 15:35:37.153506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:33.294 [2024-11-20 15:35:37.162523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e1710 00:26:33.294 [2024-11-20 15:35:37.163287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.294 [2024-11-20 15:35:37.163306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.294 [2024-11-20 15:35:37.172738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e49b0 00:26:33.294 [2024-11-20 15:35:37.173611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.294 [2024-11-20 15:35:37.173631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:33.294 [2024-11-20 15:35:37.181902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e38d0 00:26:33.294 [2024-11-20 15:35:37.182830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:33.294 [2024-11-20 15:35:37.182850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:33.294 [2024-11-20 15:35:37.191259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e27f0 00:26:33.294 [2024-11-20 15:35:37.192154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.295 [2024-11-20 15:35:37.192173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:33.553 [2024-11-20 15:35:37.200627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166de8a8 00:26:33.553 [2024-11-20 15:35:37.201481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.553 [2024-11-20 15:35:37.201503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:33.553 [2024-11-20 15:35:37.209900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ef6a8 00:26:33.554 [2024-11-20 15:35:37.210859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.210878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.219527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f7970 00:26:33.554 [2024-11-20 15:35:37.220514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:16749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.220533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.229007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e5220 00:26:33.554 [2024-11-20 15:35:37.229994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.230013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.238149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fc998 00:26:33.554 [2024-11-20 15:35:37.239183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.239201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.247319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fda78 00:26:33.554 [2024-11-20 15:35:37.248335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.248354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.256492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e8d30 00:26:33.554 [2024-11-20 15:35:37.257505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.257524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.265737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f35f0 00:26:33.554 [2024-11-20 15:35:37.266748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.266767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.274893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166eea00 00:26:33.554 [2024-11-20 15:35:37.275942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.275965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.284085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166edd58 00:26:33.554 [2024-11-20 15:35:37.285130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.285149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.293302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f0788 00:26:33.554 
[2024-11-20 15:35:37.294312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.294330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.302690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fb8b8 00:26:33.554 [2024-11-20 15:35:37.303718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.303737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.311129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ebfd0 00:26:33.554 [2024-11-20 15:35:37.312358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.312377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.319604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e5ec8 00:26:33.554 [2024-11-20 15:35:37.320186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.320205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.328815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x182a640) with pdu=0x2000166fa7d8 00:26:33.554 [2024-11-20 15:35:37.329463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.329483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.337395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ee5c8 00:26:33.554 [2024-11-20 15:35:37.337942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.337964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.347132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ee5c8 00:26:33.554 [2024-11-20 15:35:37.347750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.347769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.356252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ee5c8 00:26:33.554 [2024-11-20 15:35:37.356885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.356903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.365495] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ee5c8 00:26:33.554 [2024-11-20 15:35:37.366134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.366153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.374622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ee5c8 00:26:33.554 [2024-11-20 15:35:37.375282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.375301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.383743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ee5c8 00:26:33.554 [2024-11-20 15:35:37.384381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.384401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.392875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ee5c8 00:26:33.554 [2024-11-20 15:35:37.393564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.393582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:26:33.554 [2024-11-20 15:35:37.402031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ee5c8 00:26:33.554 [2024-11-20 15:35:37.402670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.402689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.413258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ebb98 00:26:33.554 [2024-11-20 15:35:37.414469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.414488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.420770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ff3c8 00:26:33.554 [2024-11-20 15:35:37.421515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.421534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.431162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e5658 00:26:33.554 [2024-11-20 15:35:37.432368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.432387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.440712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ecc78 00:26:33.554 [2024-11-20 15:35:37.442044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.442068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:33.554 [2024-11-20 15:35:37.450024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e6738 00:26:33.554 [2024-11-20 15:35:37.451343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.554 [2024-11-20 15:35:37.451361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:33.555 [2024-11-20 15:35:37.457770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f1430 00:26:33.555 [2024-11-20 15:35:37.458424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.555 [2024-11-20 15:35:37.458443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:33.834 [2024-11-20 15:35:37.467507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e5658 00:26:33.834 [2024-11-20 15:35:37.468169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.834 [2024-11-20 15:35:37.468188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:33.834 [2024-11-20 15:35:37.476908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f8e88 00:26:33.834 [2024-11-20 15:35:37.477893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.834 [2024-11-20 15:35:37.477912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:33.834 [2024-11-20 15:35:37.486123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fac10 00:26:33.834 [2024-11-20 15:35:37.487099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.834 [2024-11-20 15:35:37.487118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:33.834 [2024-11-20 15:35:37.495330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fbcf0 00:26:33.834 [2024-11-20 15:35:37.496310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.834 [2024-11-20 15:35:37.496329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:33.834 [2024-11-20 15:35:37.504509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ed0b0 00:26:33.834 [2024-11-20 15:35:37.505508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.834 [2024-11-20 15:35:37.505526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:33.834 [2024-11-20 15:35:37.513678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e6738 00:26:33.834 [2024-11-20 15:35:37.514658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.834 [2024-11-20 15:35:37.514677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:33.834 [2024-11-20 15:35:37.522840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e8088 00:26:33.834 [2024-11-20 15:35:37.523816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.834 [2024-11-20 15:35:37.523835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:33.834 [2024-11-20 15:35:37.532158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f7100 00:26:33.834 [2024-11-20 15:35:37.533137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.834 [2024-11-20 15:35:37.533156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:33.834 [2024-11-20 15:35:37.541296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f6890 00:26:33.834 [2024-11-20 15:35:37.542273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:33.834 [2024-11-20 15:35:37.542292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:33.834 [2024-11-20 15:35:37.550449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e5220 00:26:33.834 [2024-11-20 15:35:37.551432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.834 [2024-11-20 15:35:37.551450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:33.834 [2024-11-20 15:35:37.559885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fa3a0 00:26:33.834 [2024-11-20 15:35:37.560844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.834 [2024-11-20 15:35:37.560863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:33.834 [2024-11-20 15:35:37.569379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f81e0 00:26:33.834 [2024-11-20 15:35:37.570171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.834 [2024-11-20 15:35:37.570190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:33.834 [2024-11-20 15:35:37.578015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166eea00 00:26:33.834 [2024-11-20 15:35:37.579404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20443 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.834 [2024-11-20 15:35:37.579423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:33.834 [2024-11-20 15:35:37.586491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e1f80 00:26:33.834 [2024-11-20 15:35:37.587225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.834 [2024-11-20 15:35:37.587244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:33.835 [2024-11-20 15:35:37.595942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e0630 00:26:33.835 [2024-11-20 15:35:37.596787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.835 [2024-11-20 15:35:37.596806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:33.835 [2024-11-20 15:35:37.605309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e0ea0 00:26:33.835 [2024-11-20 15:35:37.606189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.835 [2024-11-20 15:35:37.606208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:33.835 [2024-11-20 15:35:37.614456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e6fa8 00:26:33.835 [2024-11-20 15:35:37.615328] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.835 [2024-11-20 15:35:37.615346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:33.835 [2024-11-20 15:35:37.623629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f7970 00:26:33.835 [2024-11-20 15:35:37.624499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.835 [2024-11-20 15:35:37.624519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:33.835 [2024-11-20 15:35:37.632850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f6890 00:26:33.835 [2024-11-20 15:35:37.633719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.835 [2024-11-20 15:35:37.633740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:33.835 [2024-11-20 15:35:37.641986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e5220 00:26:33.835 [2024-11-20 15:35:37.642849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.835 [2024-11-20 15:35:37.642868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:33.835 [2024-11-20 15:35:37.651190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fa3a0 00:26:33.835 [2024-11-20 15:35:37.652061] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.835 [2024-11-20 15:35:37.652080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:33.835 [2024-11-20 15:35:37.660354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f57b0 00:26:33.835 [2024-11-20 15:35:37.661224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.835 [2024-11-20 15:35:37.661243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:33.835 [2024-11-20 15:35:37.669590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e7818 00:26:33.835 [2024-11-20 15:35:37.670485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.835 [2024-11-20 15:35:37.670504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:33.835 [2024-11-20 15:35:37.678698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166de470 00:26:33.835 [2024-11-20 15:35:37.679575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.835 [2024-11-20 15:35:37.679596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:33.835 [2024-11-20 15:35:37.687872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with 
pdu=0x2000166ef270 00:26:33.835 [2024-11-20 15:35:37.688743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.835 [2024-11-20 15:35:37.688763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:33.835 [2024-11-20 15:35:37.697129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fcdd0 00:26:33.835 [2024-11-20 15:35:37.697991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.835 [2024-11-20 15:35:37.698010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:33.835 [2024-11-20 15:35:37.706395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fbcf0 00:26:33.835 [2024-11-20 15:35:37.707268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.835 [2024-11-20 15:35:37.707287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:33.835 [2024-11-20 15:35:37.715733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fac10 00:26:33.835 [2024-11-20 15:35:37.716656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.835 [2024-11-20 15:35:37.716677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:34.171 [2024-11-20 15:35:37.726573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x182a640) with pdu=0x2000166f8e88 00:26:34.171 [2024-11-20 15:35:37.727935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.171 [2024-11-20 15:35:37.727960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:34.171 [2024-11-20 15:35:37.735182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fa3a0 00:26:34.171 [2024-11-20 15:35:37.736614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.171 [2024-11-20 15:35:37.736635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:34.171 [2024-11-20 15:35:37.743295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e1b48 00:26:34.171 [2024-11-20 15:35:37.744035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.171 [2024-11-20 15:35:37.744054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:34.171 [2024-11-20 15:35:37.752853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e6fa8 00:26:34.171 [2024-11-20 15:35:37.753626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.171 [2024-11-20 15:35:37.753645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:34.171 [2024-11-20 15:35:37.762723] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e5ec8 00:26:34.171 [2024-11-20 15:35:37.763462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.171 [2024-11-20 15:35:37.763482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:34.171 [2024-11-20 15:35:37.772532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ecc78 00:26:34.171 [2024-11-20 15:35:37.773396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.171 [2024-11-20 15:35:37.773416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:34.171 [2024-11-20 15:35:37.781979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fd208 00:26:34.171 [2024-11-20 15:35:37.782852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.171 [2024-11-20 15:35:37.782871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:34.171 [2024-11-20 15:35:37.791199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166eb328 00:26:34.171 [2024-11-20 15:35:37.792108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.171 [2024-11-20 15:35:37.792127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 
dnr:0
00:26:34.171 [2024-11-20 15:35:37.800357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e49b0
00:26:34.171 [2024-11-20 15:35:37.801238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.171 [2024-11-20 15:35:37.801256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:26:34.171 [2024-11-20 15:35:37.809758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e38d0
00:26:34.171 [2024-11-20 15:35:37.810654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.171 [2024-11-20 15:35:37.810674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:26:34.171 [2024-11-20 15:35:37.818960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f2510
00:26:34.171 [2024-11-20 15:35:37.819837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.171 [2024-11-20 15:35:37.819855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:26:34.171 [2024-11-20 15:35:37.827501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e99d8
00:26:34.171 [2024-11-20 15:35:37.828354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.171 [2024-11-20 15:35:37.828373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:26:34.171 [2024-11-20 15:35:37.837119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ef6a8
00:26:34.171 [2024-11-20 15:35:37.838021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.171 [2024-11-20 15:35:37.838040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:26:34.171 [2024-11-20 15:35:37.848067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ef6a8
00:26:34.171 [2024-11-20 15:35:37.849499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.171 [2024-11-20 15:35:37.849519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:34.171 [2024-11-20 15:35:37.857633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ee5c8
00:26:34.171 [2024-11-20 15:35:37.859235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.171 [2024-11-20 15:35:37.859255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:34.171 [2024-11-20 15:35:37.864396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e6738
00:26:34.171 [2024-11-20 15:35:37.865263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.171 [2024-11-20 15:35:37.865283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:26:34.171 [2024-11-20 15:35:37.875769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f2510
00:26:34.171 [2024-11-20 15:35:37.877322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.171 [2024-11-20 15:35:37.877342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:26:34.171 [2024-11-20 15:35:37.882664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e99d8
00:26:34.171 [2024-11-20 15:35:37.883335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.171 [2024-11-20 15:35:37.883354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:26:34.171 [2024-11-20 15:35:37.894011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e7c50
00:26:34.171 [2024-11-20 15:35:37.895138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.171 [2024-11-20 15:35:37.895157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:34.171 [2024-11-20 15:35:37.902538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e38d0
00:26:34.171 [2024-11-20 15:35:37.903227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.171 [2024-11-20 15:35:37.903246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.171 [2024-11-20 15:35:37.913904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f0bc0
00:26:34.171 [2024-11-20 15:35:37.915487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.171 [2024-11-20 15:35:37.915506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:34.171 [2024-11-20 15:35:37.920558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e12d8
00:26:34.171 [2024-11-20 15:35:37.921347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.171 [2024-11-20 15:35:37.921371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:26:34.171 [2024-11-20 15:35:37.929846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ef6a8
00:26:34.171 [2024-11-20 15:35:37.930634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.171 [2024-11-20 15:35:37.930653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:26:34.171 [2024-11-20 15:35:37.940859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fac10
00:26:34.171 [2024-11-20 15:35:37.942041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.172 [2024-11-20 15:35:37.942070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:34.172 [2024-11-20 15:35:37.949568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f2d80
00:26:34.172 [2024-11-20 15:35:37.950711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.172 [2024-11-20 15:35:37.950730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:34.172 [2024-11-20 15:35:37.959132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166eaab8
00:26:34.172 [2024-11-20 15:35:37.960410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.172 [2024-11-20 15:35:37.960429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:34.172 [2024-11-20 15:35:37.968725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e27f0
00:26:34.172 [2024-11-20 15:35:37.970147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.172 [2024-11-20 15:35:37.970165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:34.172 [2024-11-20 15:35:37.978088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ebb98
00:26:34.172 [2024-11-20 15:35:37.979475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.172 [2024-11-20 15:35:37.979494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:34.172 [2024-11-20 15:35:37.984433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ecc78
00:26:34.172 [2024-11-20 15:35:37.985104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.172 [2024-11-20 15:35:37.985122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:34.172 [2024-11-20 15:35:37.994632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f6020
00:26:34.172 [2024-11-20 15:35:37.995347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.172 [2024-11-20 15:35:37.995366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:34.172 [2024-11-20 15:35:38.004346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e6738
00:26:34.172 [2024-11-20 15:35:38.005404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.172 [2024-11-20 15:35:38.005426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:34.172 [2024-11-20 15:35:38.013799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e12d8
00:26:34.172 [2024-11-20 15:35:38.014403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.172 [2024-11-20 15:35:38.014422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:34.172 [2024-11-20 15:35:38.023548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f1868
00:26:34.172 [2024-11-20 15:35:38.024278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.172 [2024-11-20 15:35:38.024298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:34.172 [2024-11-20 15:35:38.032475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e73e0
00:26:34.172 [2024-11-20 15:35:38.035127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.172 [2024-11-20 15:35:38.035147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:34.172 27512.00 IOPS, 107.47 MiB/s [2024-11-20T14:35:38.080Z] [2024-11-20 15:35:38.044247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f8618
00:26:34.172 [2024-11-20 15:35:38.045769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.172 [2024-11-20 15:35:38.045789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:34.172 [2024-11-20 15:35:38.050998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e73e0
00:26:34.172 [2024-11-20 15:35:38.051835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.172 [2024-11-20 15:35:38.051854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:34.432 [2024-11-20 15:35:38.062499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f6458
00:26:34.432 [2024-11-20 15:35:38.063674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.432 [2024-11-20 15:35:38.063693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:34.432 [2024-11-20 15:35:38.071899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e6738
00:26:34.432 [2024-11-20 15:35:38.073006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.432 [2024-11-20 15:35:38.073025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:34.432 [2024-11-20 15:35:38.080658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e6fa8
00:26:34.432 [2024-11-20 15:35:38.081666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.432 [2024-11-20 15:35:38.081685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:34.432 [2024-11-20 15:35:38.089884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166de8a8
00:26:34.432 [2024-11-20 15:35:38.090982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.432 [2024-11-20 15:35:38.091001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:26:34.432 [2024-11-20 15:35:38.099497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fbcf0
00:26:34.432 [2024-11-20 15:35:38.100115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.432 [2024-11-20 15:35:38.100135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:26:34.432 [2024-11-20 15:35:38.108629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f4f40
00:26:34.432 [2024-11-20 15:35:38.109428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.432 [2024-11-20 15:35:38.109447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.117343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f5be8
00:26:34.433 [2024-11-20 15:35:38.118142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.118161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.126927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ec408
00:26:34.433 [2024-11-20 15:35:38.127911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.127930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.138444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f46d0
00:26:34.433 [2024-11-20 15:35:38.139825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.139844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.145129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fd640
00:26:34.433 [2024-11-20 15:35:38.145764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.145783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.154920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e27f0
00:26:34.433 [2024-11-20 15:35:38.155561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.155580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.164353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ebfd0
00:26:34.433 [2024-11-20 15:35:38.165218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.165241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.174250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e88f8
00:26:34.433 [2024-11-20 15:35:38.175338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.175357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.183913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fc998
00:26:34.433 [2024-11-20 15:35:38.185154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.185174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.193328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fc560
00:26:34.433 [2024-11-20 15:35:38.194093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.194112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.202692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f7100
00:26:34.433 [2024-11-20 15:35:38.203699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.203718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.211779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f7538
00:26:34.433 [2024-11-20 15:35:38.212778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.212797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.220195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166feb58
00:26:34.433 [2024-11-20 15:35:38.221294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.221313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.229749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f3e60
00:26:34.433 [2024-11-20 15:35:38.230924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.230943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.239516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fcdd0
00:26:34.433 [2024-11-20 15:35:38.240823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.240842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.248685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e12d8
00:26:34.433 [2024-11-20 15:35:38.249685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.249704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.257733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e12d8
00:26:34.433 [2024-11-20 15:35:38.258729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.258748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.266347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fc998
00:26:34.433 [2024-11-20 15:35:38.267409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.267428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.277596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ec408
00:26:34.433 [2024-11-20 15:35:38.279146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.279165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.284045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e12d8
00:26:34.433 [2024-11-20 15:35:38.284685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.284704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.293305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e8d30
00:26:34.433 [2024-11-20 15:35:38.293953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.293973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.301824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f3e60
00:26:34.433 [2024-11-20 15:35:38.302540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.433 [2024-11-20 15:35:38.302559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:26:34.433 [2024-11-20 15:35:38.313407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e6fa8
00:26:34.434 [2024-11-20 15:35:38.314642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.434 [2024-11-20 15:35:38.314661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:26:34.434 [2024-11-20 15:35:38.323308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f1430
00:26:34.434 [2024-11-20 15:35:38.324660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.434 [2024-11-20 15:35:38.324679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:26:34.434 [2024-11-20 15:35:38.332919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fe2e8
00:26:34.434 [2024-11-20 15:35:38.334216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.434 [2024-11-20 15:35:38.334234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:34.694 [2024-11-20 15:35:38.340749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e6738
00:26:34.694 [2024-11-20 15:35:38.341655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.694 [2024-11-20 15:35:38.341674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:34.694 [2024-11-20 15:35:38.350614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e0630
00:26:34.694 [2024-11-20 15:35:38.351648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.694 [2024-11-20 15:35:38.351667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:26:34.694 [2024-11-20 15:35:38.361713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e0630
00:26:34.694 [2024-11-20 15:35:38.363389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.694 [2024-11-20 15:35:38.363424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:34.694 [2024-11-20 15:35:38.368537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ec408
00:26:34.694 [2024-11-20 15:35:38.369423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.694 [2024-11-20 15:35:38.369446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:26:34.694 [2024-11-20 15:35:38.379773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f0350
00:26:34.694 [2024-11-20 15:35:38.380992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.694 [2024-11-20 15:35:38.381011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:34.694 [2024-11-20 15:35:38.387516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e3498
00:26:34.694 [2024-11-20 15:35:38.388041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.694 [2024-11-20 15:35:38.388059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:26:34.694 [2024-11-20 15:35:38.397892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fc128
00:26:34.694 [2024-11-20 15:35:38.399114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.694 [2024-11-20 15:35:38.399133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:34.694 [2024-11-20 15:35:38.406738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fdeb0
00:26:34.694 [2024-11-20 15:35:38.407981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.694 [2024-11-20 15:35:38.408000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:34.694 [2024-11-20 15:35:38.416028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e9e10
00:26:34.694 [2024-11-20 15:35:38.416781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.694 [2024-11-20 15:35:38.416801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:34.694 [2024-11-20 15:35:38.426585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f31b8
00:26:34.694 [2024-11-20 15:35:38.428162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.694 [2024-11-20 15:35:38.428180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:34.694 [2024-11-20 15:35:38.433082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f5be8
00:26:34.694 [2024-11-20 15:35:38.433841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.694 [2024-11-20 15:35:38.433860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:26:34.694 [2024-11-20 15:35:38.441820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e8088
00:26:34.694 [2024-11-20 15:35:38.442583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.694 [2024-11-20 15:35:38.442601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:34.694 [2024-11-20 15:35:38.451431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f3a28
00:26:34.694 [2024-11-20 15:35:38.452279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.694 [2024-11-20 15:35:38.452298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:26:34.694 [2024-11-20 15:35:38.462562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fe2e8
00:26:34.694 [2024-11-20 15:35:38.463836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.694 [2024-11-20 15:35:38.463856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:34.694 [2024-11-20 15:35:38.470420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fd640
00:26:34.694 [2024-11-20 15:35:38.470965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.694 [2024-11-20 15:35:38.470984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:26:34.694 [2024-11-20 15:35:38.480046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166eff18
00:26:34.694 [2024-11-20 15:35:38.480691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.694 [2024-11-20 15:35:38.480710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:34.694 [2024-11-20 15:35:38.489623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166eea00
00:26:34.694 [2024-11-20 15:35:38.490395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.694 [2024-11-20 15:35:38.490419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:26:34.694 [2024-11-20 15:35:38.499052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f3a28
00:26:34.694 [2024-11-20 15:35:38.500074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.695 [2024-11-20 15:35:38.500093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:34.695 [2024-11-20 15:35:38.507437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ecc78
00:26:34.695 [2024-11-20 15:35:38.508443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.695 [2024-11-20 15:35:38.508461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:26:34.695 [2024-11-20 15:35:38.517041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ee190
00:26:34.695 [2024-11-20 15:35:38.518239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.695 [2024-11-20 15:35:38.518258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:26:34.695 [2024-11-20 15:35:38.526588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e01f8
00:26:34.695 [2024-11-20 15:35:38.527916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.695 [2024-11-20 15:35:38.527935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:26:34.695 [2024-11-20 15:35:38.535110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166eff18
00:26:34.695 [2024-11-20 15:35:38.536481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.695 [2024-11-20 15:35:38.536500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:34.695 [2024-11-20 15:35:38.543204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ecc78
00:26:34.695 [2024-11-20 15:35:38.543975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.695 [2024-11-20 15:35:38.543994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:34.695 [2024-11-20 15:35:38.554531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f9b30
00:26:34.695 [2024-11-20 15:35:38.555674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.695 [2024-11-20 15:35:38.555693] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:34.695 [2024-11-20 15:35:38.563705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f4b08 00:26:34.695 [2024-11-20 15:35:38.564799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.695 [2024-11-20 15:35:38.564819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:34.695 [2024-11-20 15:35:38.573160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166df988 00:26:34.695 [2024-11-20 15:35:38.574178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.695 [2024-11-20 15:35:38.574198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:34.695 [2024-11-20 15:35:38.583067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f2510 00:26:34.695 [2024-11-20 15:35:38.584336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.695 [2024-11-20 15:35:38.584355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:34.695 [2024-11-20 15:35:38.591754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e3060 00:26:34.695 [2024-11-20 15:35:38.592628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.695 [2024-11-20 15:35:38.592647] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:34.954 [2024-11-20 15:35:38.602862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e27f0 00:26:34.954 [2024-11-20 15:35:38.604406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.954 [2024-11-20 15:35:38.604426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:34.954 [2024-11-20 15:35:38.609580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e6b70 00:26:34.954 [2024-11-20 15:35:38.610343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.954 [2024-11-20 15:35:38.610361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.954 [2024-11-20 15:35:38.618867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f20d8 00:26:34.954 [2024-11-20 15:35:38.619685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.954 [2024-11-20 15:35:38.619704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:34.954 [2024-11-20 15:35:38.630465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ddc00 00:26:34.954 [2024-11-20 15:35:38.631850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4683 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:34.954 [2024-11-20 15:35:38.631869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:34.954 [2024-11-20 15:35:38.638989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166df988 00:26:34.954 [2024-11-20 15:35:38.639911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.954 [2024-11-20 15:35:38.639930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:34.954 [2024-11-20 15:35:38.647687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f4298 00:26:34.954 [2024-11-20 15:35:38.648679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.954 [2024-11-20 15:35:38.648698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.954 [2024-11-20 15:35:38.656802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166df988 00:26:34.954 [2024-11-20 15:35:38.657694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.954 [2024-11-20 15:35:38.657713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:34.954 [2024-11-20 15:35:38.666438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f3a28 00:26:34.954 [2024-11-20 15:35:38.667497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 
nsid:1 lba:7095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.954 [2024-11-20 15:35:38.667516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:34.954 [2024-11-20 15:35:38.676970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e5658 00:26:34.954 [2024-11-20 15:35:38.678462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.954 [2024-11-20 15:35:38.678480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:34.954 [2024-11-20 15:35:38.683416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166eaab8 00:26:34.954 [2024-11-20 15:35:38.683996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.954 [2024-11-20 15:35:38.684015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.954 [2024-11-20 15:35:38.692993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166edd58 00:26:34.954 [2024-11-20 15:35:38.693686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.954 [2024-11-20 15:35:38.693704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:34.954 [2024-11-20 15:35:38.702669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fa3a0 00:26:34.954 [2024-11-20 15:35:38.703724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.954 [2024-11-20 15:35:38.703743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:34.954 [2024-11-20 15:35:38.714089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ea248 00:26:34.954 [2024-11-20 15:35:38.715652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.954 [2024-11-20 15:35:38.715671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:34.954 [2024-11-20 15:35:38.720641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f96f8 00:26:34.954 [2024-11-20 15:35:38.721355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.954 [2024-11-20 15:35:38.721374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:34.955 [2024-11-20 15:35:38.732123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ff3c8 00:26:34.955 [2024-11-20 15:35:38.733536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.955 [2024-11-20 15:35:38.733557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:34.955 [2024-11-20 15:35:38.738918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e6fa8 00:26:34.955 
[2024-11-20 15:35:38.739649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.955 [2024-11-20 15:35:38.739669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:34.955 [2024-11-20 15:35:38.748619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ef270 00:26:34.955 [2024-11-20 15:35:38.749425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.955 [2024-11-20 15:35:38.749445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:34.955 [2024-11-20 15:35:38.758204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f6890 00:26:34.955 [2024-11-20 15:35:38.759132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.955 [2024-11-20 15:35:38.759151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:34.955 [2024-11-20 15:35:38.767838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f2d80 00:26:34.955 [2024-11-20 15:35:38.768887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.955 [2024-11-20 15:35:38.768906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:34.955 [2024-11-20 15:35:38.777519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x182a640) with pdu=0x2000166fbcf0 00:26:34.955 [2024-11-20 15:35:38.778695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.955 [2024-11-20 15:35:38.778714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:34.955 [2024-11-20 15:35:38.787129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ef270 00:26:34.955 [2024-11-20 15:35:38.788417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.955 [2024-11-20 15:35:38.788436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:34.955 [2024-11-20 15:35:38.796725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166eea00 00:26:34.955 [2024-11-20 15:35:38.798140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.955 [2024-11-20 15:35:38.798159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:34.955 [2024-11-20 15:35:38.806302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e84c0 00:26:34.955 [2024-11-20 15:35:38.807847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.955 [2024-11-20 15:35:38.807867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:34.955 [2024-11-20 15:35:38.813148] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166ea248 00:26:34.955 [2024-11-20 15:35:38.814004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.955 [2024-11-20 15:35:38.814022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:34.955 [2024-11-20 15:35:38.824768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e5a90 00:26:34.955 [2024-11-20 15:35:38.826103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.955 [2024-11-20 15:35:38.826121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:34.955 [2024-11-20 15:35:38.834525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166eee38 00:26:34.955 [2024-11-20 15:35:38.835868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.955 [2024-11-20 15:35:38.835886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:34.955 [2024-11-20 15:35:38.843672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e6300 00:26:34.955 [2024-11-20 15:35:38.845117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.955 [2024-11-20 15:35:38.845136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
00:26:34.955 [2024-11-20 15:35:38.850291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166fa3a0 00:26:34.955 [2024-11-20 15:35:38.850983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.955 [2024-11-20 15:35:38.851002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:35.214 [2024-11-20 15:35:38.860045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166eff18 00:26:35.214 [2024-11-20 15:35:38.860815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.214 [2024-11-20 15:35:38.860833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:35.214 [2024-11-20 15:35:38.869937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166eff18 00:26:35.214 [2024-11-20 15:35:38.870688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.214 [2024-11-20 15:35:38.870707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:35.214 [2024-11-20 15:35:38.880503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166eff18 00:26:35.214 [2024-11-20 15:35:38.881828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.214 [2024-11-20 15:35:38.881847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:35.214 [2024-11-20 15:35:38.889039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166de038 00:26:35.214 [2024-11-20 15:35:38.890354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.214 [2024-11-20 15:35:38.890373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:35.214 [2024-11-20 15:35:38.896904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f35f0 00:26:35.214 [2024-11-20 15:35:38.897635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.214 [2024-11-20 15:35:38.897654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:35.214 [2024-11-20 15:35:38.908196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f96f8 00:26:35.214 [2024-11-20 15:35:38.909431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.214 [2024-11-20 15:35:38.909450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:35.214 [2024-11-20 15:35:38.917561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e99d8 00:26:35.214 [2024-11-20 15:35:38.918330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.214 [2024-11-20 15:35:38.918349] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:35.214 [2024-11-20 15:35:38.926110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e6b70 00:26:35.214 [2024-11-20 15:35:38.927584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.214 [2024-11-20 15:35:38.927603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:35.214 [2024-11-20 15:35:38.934141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e2c28 00:26:35.214 [2024-11-20 15:35:38.934868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.214 [2024-11-20 15:35:38.934887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:35.214 [2024-11-20 15:35:38.943807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f0ff8 00:26:35.214 [2024-11-20 15:35:38.944601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.214 [2024-11-20 15:35:38.944621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:35.214 [2024-11-20 15:35:38.953586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166de038 00:26:35.214 [2024-11-20 15:35:38.954559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.214 [2024-11-20 15:35:38.954579] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:35.214 [2024-11-20 15:35:38.963154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166f8a50 00:26:35.214 [2024-11-20 15:35:38.964265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.214 [2024-11-20 15:35:38.964285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:35.214 [2024-11-20 15:35:38.972386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e1710 00:26:35.214 [2024-11-20 15:35:38.973169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.214 [2024-11-20 15:35:38.973193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:35.215 [2024-11-20 15:35:38.981426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e1710 00:26:35.215 [2024-11-20 15:35:38.982311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.215 [2024-11-20 15:35:38.982330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:35.215 [2024-11-20 15:35:38.990587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e1710 00:26:35.215 [2024-11-20 15:35:38.991496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4738 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:35.215 [2024-11-20 15:35:38.991515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:35.215 [2024-11-20 15:35:38.999733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e1710 00:26:35.215 [2024-11-20 15:35:39.000598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.215 [2024-11-20 15:35:39.000617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:35.215 [2024-11-20 15:35:39.008892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e1710 00:26:35.215 [2024-11-20 15:35:39.009766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.215 [2024-11-20 15:35:39.009785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:35.215 [2024-11-20 15:35:39.018216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e1710 00:26:35.215 [2024-11-20 15:35:39.019100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.215 [2024-11-20 15:35:39.019129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:35.215 [2024-11-20 15:35:39.027372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e1710 00:26:35.215 [2024-11-20 15:35:39.028157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 
nsid:1 lba:13990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.215 [2024-11-20 15:35:39.028176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:35.215 [2024-11-20 15:35:39.036488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182a640) with pdu=0x2000166e1710 00:26:35.215 27514.00 IOPS, 107.48 MiB/s [2024-11-20T14:35:39.123Z] [2024-11-20 15:35:39.037347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.215 [2024-11-20 15:35:39.037364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:35.215 00:26:35.215 Latency(us) 00:26:35.215 [2024-11-20T14:35:39.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.215 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:35.215 nvme0n1 : 2.00 27506.24 107.45 0.00 0.00 4647.32 1823.61 12879.25 00:26:35.215 [2024-11-20T14:35:39.123Z] =================================================================================================================== 00:26:35.215 [2024-11-20T14:35:39.123Z] Total : 27506.24 107.45 0.00 0.00 4647.32 1823.61 12879.25 00:26:35.215 { 00:26:35.215 "results": [ 00:26:35.215 { 00:26:35.215 "job": "nvme0n1", 00:26:35.215 "core_mask": "0x2", 00:26:35.215 "workload": "randwrite", 00:26:35.215 "status": "finished", 00:26:35.215 "queue_depth": 128, 00:26:35.215 "io_size": 4096, 00:26:35.215 "runtime": 2.002891, 00:26:35.215 "iops": 27506.239730469606, 00:26:35.215 "mibps": 107.4462489471469, 00:26:35.215 "io_failed": 0, 00:26:35.215 "io_timeout": 0, 00:26:35.215 "avg_latency_us": 4647.324271368998, 00:26:35.215 "min_latency_us": 1823.6104347826088, 00:26:35.215 "max_latency_us": 12879.248695652173 00:26:35.215 } 00:26:35.215 ], 
00:26:35.215 "core_count": 1 00:26:35.215 } 00:26:35.215 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:35.215 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:35.215 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:35.215 | .driver_specific 00:26:35.215 | .nvme_error 00:26:35.215 | .status_code 00:26:35.215 | .command_transient_transport_error' 00:26:35.215 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:35.486 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 216 > 0 )) 00:26:35.486 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2310940 00:26:35.486 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2310940 ']' 00:26:35.486 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2310940 00:26:35.486 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:35.486 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:35.486 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2310940 00:26:35.486 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:35.486 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:35.486 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2310940' 00:26:35.486 killing process with pid 2310940 00:26:35.486 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2310940 00:26:35.486 Received shutdown signal, test time was about 2.000000 seconds 00:26:35.486 00:26:35.486 Latency(us) 00:26:35.486 [2024-11-20T14:35:39.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.486 [2024-11-20T14:35:39.394Z] =================================================================================================================== 00:26:35.486 [2024-11-20T14:35:39.394Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:35.486 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2310940 00:26:35.744 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:35.744 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:35.744 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:35.744 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:35.744 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:35.744 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2311424 00:26:35.744 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:35.744 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2311424 /var/tmp/bperf.sock 00:26:35.744 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 
2311424 ']' 00:26:35.744 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:35.744 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:35.744 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:35.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:35.744 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:35.744 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.744 [2024-11-20 15:35:39.519211] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:26:35.744 [2024-11-20 15:35:39.519258] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311424 ] 00:26:35.744 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:35.744 Zero copy mechanism will not be used. 
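
Earlier in this log, `get_transient_errcount` extracts the per-bdev transient-error counter from `bdev_get_iostat` with the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`, and the test then asserts `(( 216 > 0 ))`. A minimal Python sketch of that same extraction, assuming a hypothetical, trimmed `bdev_get_iostat` payload shaped the way the jq filter walks it (the `216` value is taken from the check above; the payload itself is illustrative, not captured from this run):

```python
import json

# Hypothetical, trimmed bdev_get_iostat response; only the fields the jq
# filter touches are included. The count 216 mirrors the (( 216 > 0 ))
# check in this log.
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 216
          }
        }
      }
    }
  ]
}
""")

# Same path as the jq filter:
# .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error
count = sample["bdevs"][0]["driver_specific"]["nvme_error"][
    "status_code"]["command_transient_transport_error"]

# The digest-error test passes when the injected CRC corruption produced
# at least one transient transport error completion.
assert count > 0
print(count)
```

This mirrors what `host/digest.sh@71` does in shell: run the RPC, pull out one integer, and treat any nonzero value as proof the injected data-digest corruption surfaced as NVMe transient transport errors.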
00:26:35.744 [2024-11-20 15:35:39.592653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.744 [2024-11-20 15:35:39.630368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.002 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:36.002 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:36.003 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:36.003 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:36.261 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:36.261 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.261 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:36.261 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.261 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.261 15:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.520 nvme0n1 00:26:36.520 15:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:36.520 15:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.520 15:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:36.520 15:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.520 15:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:36.520 15:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:36.520 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:36.520 Zero copy mechanism will not be used. 00:26:36.520 Running I/O for 2 seconds... 00:26:36.520 [2024-11-20 15:35:40.378830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.520 [2024-11-20 15:35:40.378915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.520 [2024-11-20 15:35:40.378959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.520 [2024-11-20 15:35:40.383356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.520 [2024-11-20 15:35:40.383443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.520 [2024-11-20 15:35:40.383470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.520 
[2024-11-20 15:35:40.388281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.520 [2024-11-20 15:35:40.388339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.520 [2024-11-20 15:35:40.388362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.520 [2024-11-20 15:35:40.394388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.520 [2024-11-20 15:35:40.394449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.520 [2024-11-20 15:35:40.394469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.520 [2024-11-20 15:35:40.399544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.520 [2024-11-20 15:35:40.399600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.520 [2024-11-20 15:35:40.399619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.520 [2024-11-20 15:35:40.404296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.520 [2024-11-20 15:35:40.404360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.520 [2024-11-20 15:35:40.404378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.520 [2024-11-20 15:35:40.408849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.520 [2024-11-20 15:35:40.408916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.520 [2024-11-20 15:35:40.408936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.520 [2024-11-20 15:35:40.413275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.520 [2024-11-20 15:35:40.413348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.520 [2024-11-20 15:35:40.413366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.520 [2024-11-20 15:35:40.417682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.520 [2024-11-20 15:35:40.417755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.520 [2024-11-20 15:35:40.417780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.520 [2024-11-20 15:35:40.422180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.520 [2024-11-20 15:35:40.422247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.520 [2024-11-20 15:35:40.422266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.780 [2024-11-20 15:35:40.426598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.780 [2024-11-20 15:35:40.426658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.780 [2024-11-20 15:35:40.426677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.780 [2024-11-20 15:35:40.431001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.780 [2024-11-20 15:35:40.431076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.780 [2024-11-20 15:35:40.431095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.780 [2024-11-20 15:35:40.435454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.780 [2024-11-20 15:35:40.435510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.780 [2024-11-20 15:35:40.435528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.780 [2024-11-20 15:35:40.439892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.780 [2024-11-20 15:35:40.439959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.780 [2024-11-20 15:35:40.439978] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.780 [2024-11-20 15:35:40.444342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.780 [2024-11-20 15:35:40.444410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.780 [2024-11-20 15:35:40.444428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.780 [2024-11-20 15:35:40.448747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.780 [2024-11-20 15:35:40.448813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.780 [2024-11-20 15:35:40.448831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.780 [2024-11-20 15:35:40.453192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.780 [2024-11-20 15:35:40.453262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.780 [2024-11-20 15:35:40.453280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.780 [2024-11-20 15:35:40.457575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.780 [2024-11-20 15:35:40.457652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:36.780 [2024-11-20 15:35:40.457676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.780 [2024-11-20 15:35:40.461975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.780 [2024-11-20 15:35:40.462045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.780 [2024-11-20 15:35:40.462063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.780 [2024-11-20 15:35:40.466376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.780 [2024-11-20 15:35:40.466436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.780 [2024-11-20 15:35:40.466455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.780 [2024-11-20 15:35:40.470813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.780 [2024-11-20 15:35:40.470884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.780 [2024-11-20 15:35:40.470902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.780 [2024-11-20 15:35:40.475493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.780 [2024-11-20 15:35:40.475578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.780 [2024-11-20 15:35:40.475599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.780 [2024-11-20 15:35:40.480197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.780 [2024-11-20 15:35:40.480256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.780 [2024-11-20 15:35:40.480275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.780 [2024-11-20 15:35:40.484624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.780 [2024-11-20 15:35:40.484678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.780 [2024-11-20 15:35:40.484696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.780 [2024-11-20 15:35:40.488939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.780 [2024-11-20 15:35:40.489011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.780 [2024-11-20 15:35:40.489029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.780 [2024-11-20 15:35:40.493475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.780 [2024-11-20 15:35:40.493538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.780 [2024-11-20 15:35:40.493556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.780 [2024-11-20 15:35:40.498203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.780 [2024-11-20 15:35:40.498259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.780 [2024-11-20 15:35:40.498277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.780 [2024-11-20 15:35:40.503724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.780 [2024-11-20 15:35:40.503882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.780 [2024-11-20 15:35:40.503902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.780 [2024-11-20 15:35:40.509238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.509298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.509316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.514373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 
00:26:36.781 [2024-11-20 15:35:40.514434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.514453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.519531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.519622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.519642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.524357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.524410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.524428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.529172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.529230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.529249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.533920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.533994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.534013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.538627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.538700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.538723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.543421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.543516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.543536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.548248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.548327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.548346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.552939] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.553010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.553028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.557738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.557791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.557810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.562554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.562617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.562636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.567321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.567454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.567474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:36.781 [2024-11-20 15:35:40.572297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.572359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.572377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.576978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.577047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.577066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.581694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.581772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.581796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.586505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.586557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.586576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.591227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.591281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.591299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.596257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.596325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.596344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.600932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.601006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.601024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.605621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.605683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.605701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.610328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.610399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.610418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.615070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.615127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.615146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.619883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.619953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.619972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.625322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.625385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:36.781 [2024-11-20 15:35:40.625403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.630117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.630180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.630198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.634795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.781 [2024-11-20 15:35:40.634892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.781 [2024-11-20 15:35:40.634912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.781 [2024-11-20 15:35:40.640478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.782 [2024-11-20 15:35:40.640652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.782 [2024-11-20 15:35:40.640672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.782 [2024-11-20 15:35:40.646392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.782 [2024-11-20 15:35:40.646490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.782 [2024-11-20 15:35:40.646510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.782 [2024-11-20 15:35:40.651338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.782 [2024-11-20 15:35:40.651427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.782 [2024-11-20 15:35:40.651448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.782 [2024-11-20 15:35:40.656274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.782 [2024-11-20 15:35:40.656356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.782 [2024-11-20 15:35:40.656377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.782 [2024-11-20 15:35:40.661160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.782 [2024-11-20 15:35:40.661246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.782 [2024-11-20 15:35:40.661265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.782 [2024-11-20 15:35:40.666029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.782 [2024-11-20 15:35:40.666191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.782 [2024-11-20 15:35:40.666211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.782 [2024-11-20 15:35:40.670838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.782 [2024-11-20 15:35:40.670942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.782 [2024-11-20 15:35:40.670968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.782 [2024-11-20 15:35:40.675591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.782 [2024-11-20 15:35:40.675687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.782 [2024-11-20 15:35:40.675707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.782 [2024-11-20 15:35:40.680375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:36.782 [2024-11-20 15:35:40.680482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.782 [2024-11-20 15:35:40.680502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.042 [2024-11-20 15:35:40.685336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 
00:26:37.042 [2024-11-20 15:35:40.685492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.042 [2024-11-20 15:35:40.685513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.042 [2024-11-20 15:35:40.690396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.042 [2024-11-20 15:35:40.690476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.042 [2024-11-20 15:35:40.690496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.042 [2024-11-20 15:35:40.695224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.042 [2024-11-20 15:35:40.695327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.042 [2024-11-20 15:35:40.695348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.042 [2024-11-20 15:35:40.700135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.042 [2024-11-20 15:35:40.700247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.042 [2024-11-20 15:35:40.700267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.042 [2024-11-20 15:35:40.704940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.042 [2024-11-20 15:35:40.705068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.042 [2024-11-20 15:35:40.705089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.042 [2024-11-20 15:35:40.709768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.042 [2024-11-20 15:35:40.709862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.042 [2024-11-20 15:35:40.709885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.042 [2024-11-20 15:35:40.714598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.042 [2024-11-20 15:35:40.714726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.042 [2024-11-20 15:35:40.714747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.042 [2024-11-20 15:35:40.719517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.042 [2024-11-20 15:35:40.719682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.042 [2024-11-20 15:35:40.719702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.042 [2024-11-20 15:35:40.724560] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.042 [2024-11-20 15:35:40.724654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.042 [2024-11-20 15:35:40.724674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.042 [2024-11-20 15:35:40.729466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.042 [2024-11-20 15:35:40.729611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.042 [2024-11-20 15:35:40.729630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.042 [2024-11-20 15:35:40.734253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.042 [2024-11-20 15:35:40.734397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.042 [2024-11-20 15:35:40.734417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.042 [2024-11-20 15:35:40.739006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.042 [2024-11-20 15:35:40.739088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.042 [2024-11-20 15:35:40.739106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:37.042 [2024-11-20 15:35:40.743770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.042 [2024-11-20 15:35:40.743876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.042 [2024-11-20 15:35:40.743896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.042 [2024-11-20 15:35:40.748566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.042 [2024-11-20 15:35:40.748680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.042 [2024-11-20 15:35:40.748701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.042 [2024-11-20 15:35:40.753777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.042 [2024-11-20 15:35:40.753901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.042 [2024-11-20 15:35:40.753921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.042 [2024-11-20 15:35:40.759220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.042 [2024-11-20 15:35:40.759291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.042 [2024-11-20 15:35:40.759310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.042 [2024-11-20 15:35:40.763689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.042 [2024-11-20 15:35:40.763755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.042 [2024-11-20 15:35:40.763773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.042 [2024-11-20 15:35:40.768034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.042 [2024-11-20 15:35:40.768105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.042 [2024-11-20 15:35:40.768125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.042 [2024-11-20 15:35:40.772393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.042 [2024-11-20 15:35:40.772469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.042 [2024-11-20 15:35:40.772488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.042 [2024-11-20 15:35:40.777216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.042 [2024-11-20 15:35:40.777280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.042 [2024-11-20 15:35:40.777298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.042 [2024-11-20 15:35:40.782080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.042 [2024-11-20 15:35:40.782150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.782169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.786968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.787073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.787093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.791869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.791968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.791987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.796661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.796767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:37.043 [2024-11-20 15:35:40.796786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.801806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.802000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.802021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.807035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.807161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.807181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.812010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.812109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.812129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.817285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.817427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.817447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.823161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.823247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.823268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.830259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.830390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.830410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.835657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.835714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.835733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.840590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.840648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.840671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.845624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.845772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.845792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.850554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.850615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.850633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.855275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.855338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.855357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.860096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 
00:26:37.043 [2024-11-20 15:35:40.860149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.860168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.864958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.865014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.865034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.869712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.869784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.869804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.874110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.874203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.874224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.879357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.879443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.879463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.885439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.885592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.885612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.891758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.891902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.891923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.897530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.897620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.897640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.904884] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.904943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.904968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.043 [2024-11-20 15:35:40.910023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.043 [2024-11-20 15:35:40.910082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.043 [2024-11-20 15:35:40.910102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.044 [2024-11-20 15:35:40.914845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.044 [2024-11-20 15:35:40.914942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.044 [2024-11-20 15:35:40.914968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.044 [2024-11-20 15:35:40.919903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.044 [2024-11-20 15:35:40.919969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.044 [2024-11-20 15:35:40.919989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0
00:26:37.044 [2024-11-20 15:35:40.924388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8
00:26:37.044 [2024-11-20 15:35:40.924453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.044 [2024-11-20 15:35:40.924472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-line event (tcp.c:2233:data_crc32_calc_done data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8, the failing WRITE command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for several dozen more WRITE commands — qid:1, cid:0-2, varying lba and sqhd — from 15:35:40.929 through 15:35:41.330 ...]
00:26:37.568 [2024-11-20 15:35:41.330524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8
00:26:37.568 [2024-11-20 15:35:41.330736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.568 [2024-11-20 15:35:41.330756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.568 [2024-11-20 15:35:41.334488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.568 [2024-11-20 15:35:41.334742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.568 [2024-11-20 15:35:41.334762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.568 [2024-11-20 15:35:41.338401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.568 [2024-11-20 15:35:41.338638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.568 [2024-11-20 15:35:41.338657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.568 [2024-11-20 15:35:41.342429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.568 [2024-11-20 15:35:41.342686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.568 [2024-11-20 15:35:41.342705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.568 [2024-11-20 15:35:41.346330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.568 [2024-11-20 15:35:41.346554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.568 [2024-11-20 15:35:41.346574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.568 [2024-11-20 15:35:41.350302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.568 [2024-11-20 15:35:41.350551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.568 [2024-11-20 15:35:41.350570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.568 [2024-11-20 15:35:41.354922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.568 [2024-11-20 15:35:41.355229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.568 [2024-11-20 15:35:41.355249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.568 [2024-11-20 15:35:41.359736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.568 [2024-11-20 15:35:41.359995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.568 [2024-11-20 15:35:41.360019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.568 [2024-11-20 15:35:41.365064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.568 [2024-11-20 15:35:41.365278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.568 [2024-11-20 15:35:41.365298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.568 [2024-11-20 15:35:41.369144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.568 [2024-11-20 15:35:41.369386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.568 [2024-11-20 15:35:41.369405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.568 [2024-11-20 15:35:41.373221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.568 [2024-11-20 15:35:41.373435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.568 [2024-11-20 15:35:41.373455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.568 6263.00 IOPS, 782.88 MiB/s [2024-11-20T14:35:41.476Z] [2024-11-20 15:35:41.378422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.568 [2024-11-20 15:35:41.378525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.568 [2024-11-20 15:35:41.378545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.568 [2024-11-20 15:35:41.382966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.568 [2024-11-20 15:35:41.383110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.568 [2024-11-20 15:35:41.383129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.568 [2024-11-20 15:35:41.387205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.568 [2024-11-20 15:35:41.387376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.568 [2024-11-20 15:35:41.387396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.568 [2024-11-20 15:35:41.391576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.568 [2024-11-20 15:35:41.391746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.568 [2024-11-20 15:35:41.391765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.568 [2024-11-20 15:35:41.395671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.568 [2024-11-20 15:35:41.395799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.568 [2024-11-20 15:35:41.395819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.568 [2024-11-20 15:35:41.399762] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.568 [2024-11-20 15:35:41.399898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.568 [2024-11-20 15:35:41.399918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.568 [2024-11-20 15:35:41.403873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.568 [2024-11-20 15:35:41.404048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.568 [2024-11-20 15:35:41.404067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.568 [2024-11-20 15:35:41.408008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.568 [2024-11-20 15:35:41.408143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.568 [2024-11-20 15:35:41.408163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.568 [2024-11-20 15:35:41.412208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.568 [2024-11-20 15:35:41.412347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.568 [2024-11-20 15:35:41.412367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:37.569 [2024-11-20 15:35:41.417314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.569 [2024-11-20 15:35:41.417434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.569 [2024-11-20 15:35:41.417453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.569 [2024-11-20 15:35:41.422303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.569 [2024-11-20 15:35:41.422493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.569 [2024-11-20 15:35:41.422512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.569 [2024-11-20 15:35:41.426852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.569 [2024-11-20 15:35:41.426968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.569 [2024-11-20 15:35:41.426988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.569 [2024-11-20 15:35:41.432176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.569 [2024-11-20 15:35:41.432297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.569 [2024-11-20 15:35:41.432317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.569 [2024-11-20 15:35:41.436579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.569 [2024-11-20 15:35:41.436662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.569 [2024-11-20 15:35:41.436686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.569 [2024-11-20 15:35:41.441126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.569 [2024-11-20 15:35:41.441283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.569 [2024-11-20 15:35:41.441302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.569 [2024-11-20 15:35:41.445379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.569 [2024-11-20 15:35:41.445458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.569 [2024-11-20 15:35:41.445477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.569 [2024-11-20 15:35:41.449285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.569 [2024-11-20 15:35:41.449368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.569 [2024-11-20 15:35:41.449388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.569 [2024-11-20 15:35:41.453134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.569 [2024-11-20 15:35:41.453262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.569 [2024-11-20 15:35:41.453281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.569 [2024-11-20 15:35:41.457001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.569 [2024-11-20 15:35:41.457104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.569 [2024-11-20 15:35:41.457123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.569 [2024-11-20 15:35:41.460885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.569 [2024-11-20 15:35:41.460977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.569 [2024-11-20 15:35:41.460997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.569 [2024-11-20 15:35:41.464772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.569 [2024-11-20 15:35:41.464875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:37.569 [2024-11-20 15:35:41.464895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.569 [2024-11-20 15:35:41.468717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.569 [2024-11-20 15:35:41.468838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.569 [2024-11-20 15:35:41.468857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.829 [2024-11-20 15:35:41.472692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.829 [2024-11-20 15:35:41.472816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.829 [2024-11-20 15:35:41.472836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.829 [2024-11-20 15:35:41.476612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.829 [2024-11-20 15:35:41.476710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.829 [2024-11-20 15:35:41.476730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.829 [2024-11-20 15:35:41.480969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.829 [2024-11-20 15:35:41.481068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.829 [2024-11-20 15:35:41.481088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.829 [2024-11-20 15:35:41.485695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.829 [2024-11-20 15:35:41.485770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.829 [2024-11-20 15:35:41.485790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.829 [2024-11-20 15:35:41.489838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.829 [2024-11-20 15:35:41.489933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.829 [2024-11-20 15:35:41.490103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.829 [2024-11-20 15:35:41.493756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.829 [2024-11-20 15:35:41.493859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.829 [2024-11-20 15:35:41.493880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.829 [2024-11-20 15:35:41.497703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.829 [2024-11-20 15:35:41.497831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.829 [2024-11-20 15:35:41.497851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.829 [2024-11-20 15:35:41.501684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.829 [2024-11-20 15:35:41.501781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.501802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.505657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.505787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.505806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.509580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.509706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.509725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.513837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 
00:26:37.830 [2024-11-20 15:35:41.513934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.513960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.518270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.518389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.518409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.522383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.522478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.522497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.526960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.527126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.527147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.531373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.531502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.531521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.536250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.536399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.536419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.542409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.542550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.542569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.547445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.547565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.547589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.552847] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.552941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.552965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.557652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.557758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.557777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.562410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.562524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.562544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.566618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.566764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.566783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:37.830 [2024-11-20 15:35:41.570913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.571028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.571047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.574936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.575068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.575087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.578774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.578873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.578893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.582912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.583067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.583089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.586914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.587035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.587055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.590872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.590973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.591009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.594796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.594927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.594946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.598775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.598890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.598910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.602790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.602896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.602916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.606850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.606934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.606960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.610793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.610896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.830 [2024-11-20 15:35:41.610915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.614756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.614867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:37.830 [2024-11-20 15:35:41.614887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.830 [2024-11-20 15:35:41.618582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.830 [2024-11-20 15:35:41.618682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.618702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.622509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.622607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.622627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.628007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.628121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.628140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.632356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.632459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.632479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.636311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.636405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.636424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.640300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.640409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.640428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.644273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.644385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.644405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.648174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.648266] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.648286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.652071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.652210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.652230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.656053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.656160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.656183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.660063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.660190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.660210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.663918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.664051] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.664071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.667786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.667908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.667928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.671867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.671972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.671992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.676867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.676972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.676992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.681790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with 
pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.681995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.682015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.686682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.686789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.686809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.691088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.691222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.691242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.696049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.696138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.696158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.701052] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.701193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.701213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.706143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.706244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.706264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.710145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.710243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.710263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.714024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.714169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.714189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 
15:35:41.717910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.718032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.718051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.721881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.722024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.722044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.725781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.725906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.725926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.831 [2024-11-20 15:35:41.729824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:37.831 [2024-11-20 15:35:41.729929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.831 [2024-11-20 15:35:41.729957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:38.092 [2024-11-20 15:35:41.734151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.092 [2024-11-20 15:35:41.734253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.092 [2024-11-20 15:35:41.734273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.092 [2024-11-20 15:35:41.738885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.092 [2024-11-20 15:35:41.738990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.092 [2024-11-20 15:35:41.739010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.092 [2024-11-20 15:35:41.743223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.092 [2024-11-20 15:35:41.743321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.092 [2024-11-20 15:35:41.743341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.092 [2024-11-20 15:35:41.747324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.092 [2024-11-20 15:35:41.747433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.092 [2024-11-20 15:35:41.747453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.092 [2024-11-20 15:35:41.751344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.092 [2024-11-20 15:35:41.751415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.092 [2024-11-20 15:35:41.751435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.092 [2024-11-20 15:35:41.755314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.092 [2024-11-20 15:35:41.755414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.092 [2024-11-20 15:35:41.755434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.092 [2024-11-20 15:35:41.759302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.092 [2024-11-20 15:35:41.759412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.092 [2024-11-20 15:35:41.759432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.092 [2024-11-20 15:35:41.763171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.092 [2024-11-20 15:35:41.763295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.092 [2024-11-20 15:35:41.763315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.092 [2024-11-20 15:35:41.767153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.092 [2024-11-20 15:35:41.767258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.092 [2024-11-20 15:35:41.767281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.092 [2024-11-20 15:35:41.771306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.092 [2024-11-20 15:35:41.771394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.092 [2024-11-20 15:35:41.771414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.092 [2024-11-20 15:35:41.776239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.092 [2024-11-20 15:35:41.776338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.092 [2024-11-20 15:35:41.776358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.092 [2024-11-20 15:35:41.781091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.092 [2024-11-20 15:35:41.781184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:38.092 [2024-11-20 15:35:41.781204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.092 [2024-11-20 15:35:41.785742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.092 [2024-11-20 15:35:41.785845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.092 [2024-11-20 15:35:41.785865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.092 [2024-11-20 15:35:41.790303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.092 [2024-11-20 15:35:41.790406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.092 [2024-11-20 15:35:41.790426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.092 [2024-11-20 15:35:41.794775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.092 [2024-11-20 15:35:41.794851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.092 [2024-11-20 15:35:41.794871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.092 [2024-11-20 15:35:41.799425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.092 [2024-11-20 15:35:41.799521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.092 [2024-11-20 15:35:41.799542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.092 [2024-11-20 15:35:41.803666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.092 [2024-11-20 15:35:41.803762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.092 [2024-11-20 15:35:41.803782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.092 [2024-11-20 15:35:41.808068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.808166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.808187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.812494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.812577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.812597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.817251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.817334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.817354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.821713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.821785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.821805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.825788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.825898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.825918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.829701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.829824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.829844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.833626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 
00:26:38.093 [2024-11-20 15:35:41.833733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.833752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.837578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.837679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.837699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.841585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.841683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.841703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.845503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.845597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.845615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.849462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.849560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.849579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.853431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.853539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.853559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.857400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.857488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.857508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.861228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.861325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.861345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.864979] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.865097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.865116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.868870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.868989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.869009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.873137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.873231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.873251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.877999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.878119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.878146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:38.093 [2024-11-20 15:35:41.883510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.883642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.883662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.889427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.889617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.889636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.896143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.896367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.896388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.902487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.902681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.902702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.909812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.909979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.910000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.916360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.093 [2024-11-20 15:35:41.916508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.093 [2024-11-20 15:35:41.916529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.093 [2024-11-20 15:35:41.923168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.094 [2024-11-20 15:35:41.923258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.094 [2024-11-20 15:35:41.923277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.094 [2024-11-20 15:35:41.929917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.094 [2024-11-20 15:35:41.930171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.094 [2024-11-20 15:35:41.930192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.094 [2024-11-20 15:35:41.936590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.094 [2024-11-20 15:35:41.936774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.094 [2024-11-20 15:35:41.936795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.094 [2024-11-20 15:35:41.943470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.094 [2024-11-20 15:35:41.943681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.094 [2024-11-20 15:35:41.943701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.094 [2024-11-20 15:35:41.949962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.094 [2024-11-20 15:35:41.950155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.094 [2024-11-20 15:35:41.950175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.094 [2024-11-20 15:35:41.956647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.094 [2024-11-20 15:35:41.956838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:38.094 [2024-11-20 15:35:41.956859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.094 [2024-11-20 15:35:41.963339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.094 [2024-11-20 15:35:41.963469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.094 [2024-11-20 15:35:41.963489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.094 [2024-11-20 15:35:41.969704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.094 [2024-11-20 15:35:41.969816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.094 [2024-11-20 15:35:41.969836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.094 [2024-11-20 15:35:41.976233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.094 [2024-11-20 15:35:41.976321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.094 [2024-11-20 15:35:41.976341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.094 [2024-11-20 15:35:41.982063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.094 [2024-11-20 15:35:41.982161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.094 [2024-11-20 15:35:41.982180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.094 [2024-11-20 15:35:41.986262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.094 [2024-11-20 15:35:41.986349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.094 [2024-11-20 15:35:41.986370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.094 [2024-11-20 15:35:41.990333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.094 [2024-11-20 15:35:41.990415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.094 [2024-11-20 15:35:41.990435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.094 [2024-11-20 15:35:41.994362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.094 [2024-11-20 15:35:41.994435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.094 [2024-11-20 15:35:41.994455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.354 [2024-11-20 15:35:41.998427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.354 [2024-11-20 15:35:41.998511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.354 [2024-11-20 15:35:41.998533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.354 [2024-11-20 15:35:42.002568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.354 [2024-11-20 15:35:42.002622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.354 [2024-11-20 15:35:42.002641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.354 [2024-11-20 15:35:42.006614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.354 [2024-11-20 15:35:42.006684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.354 [2024-11-20 15:35:42.006703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.354 [2024-11-20 15:35:42.010702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.354 [2024-11-20 15:35:42.010782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.354 [2024-11-20 15:35:42.010802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.354 [2024-11-20 15:35:42.014775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 
00:26:38.354 [2024-11-20 15:35:42.014857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.354 [2024-11-20 15:35:42.014877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.354 [2024-11-20 15:35:42.018995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.354 [2024-11-20 15:35:42.019101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.354 [2024-11-20 15:35:42.019120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.354 [2024-11-20 15:35:42.022976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.354 [2024-11-20 15:35:42.023062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.354 [2024-11-20 15:35:42.023085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.354 [2024-11-20 15:35:42.026999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.354 [2024-11-20 15:35:42.027078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.354 [2024-11-20 15:35:42.027098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.354 [2024-11-20 15:35:42.032150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.354 [2024-11-20 15:35:42.032264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.354 [2024-11-20 15:35:42.032283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.354 [2024-11-20 15:35:42.038349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.354 [2024-11-20 15:35:42.038463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.354 [2024-11-20 15:35:42.038483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.354 [2024-11-20 15:35:42.044094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.354 [2024-11-20 15:35:42.044197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.354 [2024-11-20 15:35:42.044217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.354 [2024-11-20 15:35:42.049993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.354 [2024-11-20 15:35:42.050063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.354 [2024-11-20 15:35:42.050082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.354 [2024-11-20 15:35:42.055806] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.354 [2024-11-20 15:35:42.055892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.055912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.062082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.062192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.062212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.068043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.068104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.068123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.074078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.074180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.074200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:38.355 [2024-11-20 15:35:42.079005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.079094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.079115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.083085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.083172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.083191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.087072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.087155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.087175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.091054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.091108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.091126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.095122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.095197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.095217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.099094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.099175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.099199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.103228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.103311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.103331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.107316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.107381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.107400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.111219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.111291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.111310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.115227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.115308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.115326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.119637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.119706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.119725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.124168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.124228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:38.355 [2024-11-20 15:35:42.124248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.128808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.128897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.128917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.133381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.133502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.133522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.137308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.137380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.137400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.141186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.141274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.141294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.145209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.145276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.145299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.149155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.149218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.149237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.153169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.153247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.153268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.157162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.157240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.157260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.161248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.161308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.161327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.165157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.165224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.165243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.169161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.355 [2024-11-20 15:35:42.169243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.355 [2024-11-20 15:35:42.169263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.355 [2024-11-20 15:35:42.173920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 
00:26:38.356 [2024-11-20 15:35:42.174007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.356 [2024-11-20 15:35:42.174027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.356 [2024-11-20 15:35:42.178172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.356 [2024-11-20 15:35:42.178250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.356 [2024-11-20 15:35:42.178271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.356 [2024-11-20 15:35:42.182283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.356 [2024-11-20 15:35:42.182375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.356 [2024-11-20 15:35:42.182396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.356 [2024-11-20 15:35:42.186396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.356 [2024-11-20 15:35:42.186511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.356 [2024-11-20 15:35:42.186531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.356 [2024-11-20 15:35:42.190688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.356 [2024-11-20 15:35:42.190770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.356 [2024-11-20 15:35:42.190791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.356 [2024-11-20 15:35:42.195236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.356 [2024-11-20 15:35:42.195352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.356 [2024-11-20 15:35:42.195373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.356 [2024-11-20 15:35:42.200419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.356 [2024-11-20 15:35:42.200560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.356 [2024-11-20 15:35:42.200581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.356 [2024-11-20 15:35:42.206155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.356 [2024-11-20 15:35:42.206283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.356 [2024-11-20 15:35:42.206304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.356 [2024-11-20 15:35:42.210560] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.356 [2024-11-20 15:35:42.210674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.356 [2024-11-20 15:35:42.210694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.356 [2024-11-20 15:35:42.214983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.356 [2024-11-20 15:35:42.215086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.356 [2024-11-20 15:35:42.215106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.356 [2024-11-20 15:35:42.218936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.356 [2024-11-20 15:35:42.219089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.356 [2024-11-20 15:35:42.219110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.356 [2024-11-20 15:35:42.223034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.356 [2024-11-20 15:35:42.223228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.356 [2024-11-20 15:35:42.223249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:38.356 [2024-11-20 15:35:42.228473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.356 [2024-11-20 15:35:42.228620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.356 [2024-11-20 15:35:42.228640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.356 [2024-11-20 15:35:42.233269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.356 [2024-11-20 15:35:42.233409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.356 [2024-11-20 15:35:42.233429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.356 [2024-11-20 15:35:42.238050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.356 [2024-11-20 15:35:42.238154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.356 [2024-11-20 15:35:42.238174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.356 [2024-11-20 15:35:42.243767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.356 [2024-11-20 15:35:42.243854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.356 [2024-11-20 15:35:42.243874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.356 [2024-11-20 15:35:42.248278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.356 [2024-11-20 15:35:42.248342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.356 [2024-11-20 15:35:42.248361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.356 [2024-11-20 15:35:42.253084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.356 [2024-11-20 15:35:42.253175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.356 [2024-11-20 15:35:42.253195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.356 [2024-11-20 15:35:42.257594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.356 [2024-11-20 15:35:42.257689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.356 [2024-11-20 15:35:42.257710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.261666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.616 [2024-11-20 15:35:42.261752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.616 [2024-11-20 15:35:42.261775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.265686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.616 [2024-11-20 15:35:42.265783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.616 [2024-11-20 15:35:42.265802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.269720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.616 [2024-11-20 15:35:42.269799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.616 [2024-11-20 15:35:42.269820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.273703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.616 [2024-11-20 15:35:42.273774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.616 [2024-11-20 15:35:42.273792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.277709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.616 [2024-11-20 15:35:42.277776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:38.616 [2024-11-20 15:35:42.277794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.281649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.616 [2024-11-20 15:35:42.281753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.616 [2024-11-20 15:35:42.281774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.285532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.616 [2024-11-20 15:35:42.285606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.616 [2024-11-20 15:35:42.285626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.289416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.616 [2024-11-20 15:35:42.289500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.616 [2024-11-20 15:35:42.289520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.293590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.616 [2024-11-20 15:35:42.293654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12864 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.616 [2024-11-20 15:35:42.293673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.297926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.616 [2024-11-20 15:35:42.298078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.616 [2024-11-20 15:35:42.298098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.302359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.616 [2024-11-20 15:35:42.302455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.616 [2024-11-20 15:35:42.302476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.307107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.616 [2024-11-20 15:35:42.307185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.616 [2024-11-20 15:35:42.307206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.311388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.616 [2024-11-20 15:35:42.311461] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.616 [2024-11-20 15:35:42.311481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.315421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.616 [2024-11-20 15:35:42.315497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.616 [2024-11-20 15:35:42.315518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.319602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.616 [2024-11-20 15:35:42.319666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.616 [2024-11-20 15:35:42.319684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.323612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.616 [2024-11-20 15:35:42.323694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.616 [2024-11-20 15:35:42.323714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.327646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 
00:26:38.616 [2024-11-20 15:35:42.327770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.616 [2024-11-20 15:35:42.327790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.331707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.616 [2024-11-20 15:35:42.331776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.616 [2024-11-20 15:35:42.331795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.336024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.616 [2024-11-20 15:35:42.336105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.616 [2024-11-20 15:35:42.336125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.340201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.616 [2024-11-20 15:35:42.340288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.616 [2024-11-20 15:35:42.340308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.344289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.616 [2024-11-20 15:35:42.344367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.616 [2024-11-20 15:35:42.344386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.616 [2024-11-20 15:35:42.348422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.617 [2024-11-20 15:35:42.348519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.617 [2024-11-20 15:35:42.348539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.617 [2024-11-20 15:35:42.353155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.617 [2024-11-20 15:35:42.353230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.617 [2024-11-20 15:35:42.353249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.617 [2024-11-20 15:35:42.358045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.617 [2024-11-20 15:35:42.358116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.617 [2024-11-20 15:35:42.358135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:38.617 [2024-11-20 15:35:42.362666] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.617 [2024-11-20 15:35:42.362738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.617 [2024-11-20 15:35:42.362757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:38.617 [2024-11-20 15:35:42.366634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.617 [2024-11-20 15:35:42.366707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.617 [2024-11-20 15:35:42.366729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.617 [2024-11-20 15:35:42.370646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.617 [2024-11-20 15:35:42.370731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.617 [2024-11-20 15:35:42.370754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.617 [2024-11-20 15:35:42.374524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x182ab20) with pdu=0x2000166ff3c8 00:26:38.617 [2024-11-20 15:35:42.374624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.617 [2024-11-20 15:35:42.374644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:26:38.617 6593.50 IOPS, 824.19 MiB/s 00:26:38.617 Latency(us) 00:26:38.617 [2024-11-20T14:35:42.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.617 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:38.617 nvme0n1 : 2.00 6592.12 824.01 0.00 0.00 2423.21 1652.65 10200.82 00:26:38.617 [2024-11-20T14:35:42.525Z] =================================================================================================================== 00:26:38.617 [2024-11-20T14:35:42.525Z] Total : 6592.12 824.01 0.00 0.00 2423.21 1652.65 10200.82 00:26:38.617 { 00:26:38.617 "results": [ 00:26:38.617 { 00:26:38.617 "job": "nvme0n1", 00:26:38.617 "core_mask": "0x2", 00:26:38.617 "workload": "randwrite", 00:26:38.617 "status": "finished", 00:26:38.617 "queue_depth": 16, 00:26:38.617 "io_size": 131072, 00:26:38.617 "runtime": 2.002847, 00:26:38.617 "iops": 6592.116122699337, 00:26:38.617 "mibps": 824.0145153374172, 00:26:38.617 "io_failed": 0, 00:26:38.617 "io_timeout": 0, 00:26:38.617 "avg_latency_us": 2423.2113992537925, 00:26:38.617 "min_latency_us": 1652.6469565217392, 00:26:38.617 "max_latency_us": 10200.820869565217 00:26:38.617 } 00:26:38.617 ], 00:26:38.617 "core_count": 1 00:26:38.617 } 00:26:38.617 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:38.617 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:38.617 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:38.617 | .driver_specific 00:26:38.617 | .nvme_error 00:26:38.617 | .status_code 00:26:38.617 | .command_transient_transport_error' 00:26:38.617 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:38.876 
15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 426 > 0 )) 00:26:38.876 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2311424 00:26:38.876 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2311424 ']' 00:26:38.876 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2311424 00:26:38.876 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:38.876 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:38.876 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2311424 00:26:38.876 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:38.876 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:38.876 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2311424' 00:26:38.876 killing process with pid 2311424 00:26:38.876 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2311424 00:26:38.876 Received shutdown signal, test time was about 2.000000 seconds 00:26:38.876 00:26:38.876 Latency(us) 00:26:38.876 [2024-11-20T14:35:42.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.876 [2024-11-20T14:35:42.784Z] =================================================================================================================== 00:26:38.876 [2024-11-20T14:35:42.784Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:38.876 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@978 -- # wait 2311424 00:26:39.135 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2309758 00:26:39.135 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2309758 ']' 00:26:39.135 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2309758 00:26:39.135 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:39.135 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:39.135 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2309758 00:26:39.135 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:39.135 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:39.135 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2309758' 00:26:39.135 killing process with pid 2309758 00:26:39.135 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2309758 00:26:39.135 15:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2309758 00:26:39.135 00:26:39.135 real 0m13.840s 00:26:39.135 user 0m26.504s 00:26:39.135 sys 0m4.569s 00:26:39.135 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.135 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.135 ************************************ 00:26:39.135 END TEST nvmf_digest_error 00:26:39.135 ************************************ 00:26:39.394 
15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:39.394 rmmod nvme_tcp 00:26:39.394 rmmod nvme_fabrics 00:26:39.394 rmmod nvme_keyring 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2309758 ']' 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2309758 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2309758 ']' 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2309758 00:26:39.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2309758) - No such process 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2309758 is not found' 00:26:39.394 Process with pid 2309758 is not found 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:39.394 15:35:43 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.394 15:35:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.298 15:35:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:41.298 00:26:41.298 real 0m37.022s 00:26:41.298 user 0m56.164s 00:26:41.298 sys 0m13.777s 00:26:41.298 15:35:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:41.298 15:35:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:41.298 ************************************ 00:26:41.298 END TEST nvmf_digest 00:26:41.298 ************************************ 00:26:41.557 15:35:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:41.557 15:35:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:41.557 15:35:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:41.557 15:35:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # 
run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:41.557 15:35:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:41.557 15:35:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:41.557 15:35:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.557 ************************************ 00:26:41.557 START TEST nvmf_bdevperf 00:26:41.557 ************************************ 00:26:41.557 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:41.557 * Looking for test storage... 00:26:41.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:41.557 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:41.557 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:41.557 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:41.557 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:41.557 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:41.557 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:41.557 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:41.557 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:41.557 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:41.557 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:41.557 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 
00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:41.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.558 --rc genhtml_branch_coverage=1 00:26:41.558 --rc genhtml_function_coverage=1 00:26:41.558 --rc genhtml_legend=1 00:26:41.558 --rc geninfo_all_blocks=1 00:26:41.558 --rc geninfo_unexecuted_blocks=1 00:26:41.558 00:26:41.558 ' 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:41.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.558 --rc genhtml_branch_coverage=1 00:26:41.558 --rc genhtml_function_coverage=1 00:26:41.558 --rc genhtml_legend=1 00:26:41.558 --rc geninfo_all_blocks=1 00:26:41.558 --rc geninfo_unexecuted_blocks=1 00:26:41.558 00:26:41.558 ' 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:41.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.558 --rc genhtml_branch_coverage=1 00:26:41.558 --rc genhtml_function_coverage=1 00:26:41.558 --rc genhtml_legend=1 00:26:41.558 --rc geninfo_all_blocks=1 00:26:41.558 --rc geninfo_unexecuted_blocks=1 00:26:41.558 00:26:41.558 ' 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:41.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.558 --rc genhtml_branch_coverage=1 00:26:41.558 --rc genhtml_function_coverage=1 00:26:41.558 --rc genhtml_legend=1 00:26:41.558 --rc geninfo_all_blocks=1 00:26:41.558 --rc geninfo_unexecuted_blocks=1 00:26:41.558 00:26:41.558 ' 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.558 15:35:45 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.558 15:35:45 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:41.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:41.558 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:41.817 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.817 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.817 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.817 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:41.817 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 
00:26:41.817 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:41.817 15:35:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:26:48.388 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:48.388 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.388 15:35:51 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:48.388 Found net devices under 0000:86:00.0: cvl_0_0 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:48.388 Found net devices under 0000:86:00.1: cvl_0_1 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:48.388 15:35:51 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:48.388 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:48.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:26:48.389 00:26:48.389 --- 10.0.0.2 ping statistics --- 00:26:48.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.389 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:48.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:48.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:26:48.389 00:26:48.389 --- 10.0.0.1 ping statistics --- 00:26:48.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.389 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2315570 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2315570 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2315570 ']' 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.389 [2024-11-20 15:35:51.451890] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:26:48.389 [2024-11-20 15:35:51.451935] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.389 [2024-11-20 15:35:51.529880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:48.389 [2024-11-20 15:35:51.572087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.389 [2024-11-20 15:35:51.572124] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:48.389 [2024-11-20 15:35:51.572131] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.389 [2024-11-20 15:35:51.572138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.389 [2024-11-20 15:35:51.572144] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:48.389 [2024-11-20 15:35:51.573435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:48.389 [2024-11-20 15:35:51.573546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.389 [2024-11-20 15:35:51.573548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.389 [2024-11-20 15:35:51.708531] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.389 15:35:51 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.389 Malloc0 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.389 [2024-11-20 15:35:51.775913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:48.389 { 00:26:48.389 "params": { 00:26:48.389 "name": "Nvme$subsystem", 00:26:48.389 "trtype": "$TEST_TRANSPORT", 00:26:48.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:48.389 "adrfam": "ipv4", 00:26:48.389 "trsvcid": "$NVMF_PORT", 00:26:48.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:48.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:48.389 "hdgst": ${hdgst:-false}, 00:26:48.389 "ddgst": ${ddgst:-false} 00:26:48.389 }, 00:26:48.389 "method": "bdev_nvme_attach_controller" 00:26:48.389 } 00:26:48.389 EOF 00:26:48.389 )") 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:48.389 15:35:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:48.389 "params": { 00:26:48.389 "name": "Nvme1", 00:26:48.389 "trtype": "tcp", 00:26:48.390 "traddr": "10.0.0.2", 00:26:48.390 "adrfam": "ipv4", 00:26:48.390 "trsvcid": "4420", 00:26:48.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:48.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:48.390 "hdgst": false, 00:26:48.390 "ddgst": false 00:26:48.390 }, 00:26:48.390 "method": "bdev_nvme_attach_controller" 00:26:48.390 }' 00:26:48.390 [2024-11-20 15:35:51.826568] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:26:48.390 [2024-11-20 15:35:51.826616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2315672 ] 00:26:48.390 [2024-11-20 15:35:51.884704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.390 [2024-11-20 15:35:51.926827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.390 Running I/O for 1 seconds... 
00:26:49.324 11061.00 IOPS, 43.21 MiB/s 00:26:49.324 Latency(us) 00:26:49.324 [2024-11-20T14:35:53.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.324 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:49.324 Verification LBA range: start 0x0 length 0x4000 00:26:49.324 Nvme1n1 : 1.01 11133.43 43.49 0.00 0.00 11443.91 2094.30 12252.38 00:26:49.324 [2024-11-20T14:35:53.232Z] =================================================================================================================== 00:26:49.324 [2024-11-20T14:35:53.232Z] Total : 11133.43 43.49 0.00 0.00 11443.91 2094.30 12252.38 00:26:49.582 15:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2315906 00:26:49.582 15:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:49.582 15:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:49.582 15:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:49.582 15:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:49.582 15:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:49.582 15:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:49.582 15:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:49.582 { 00:26:49.582 "params": { 00:26:49.582 "name": "Nvme$subsystem", 00:26:49.582 "trtype": "$TEST_TRANSPORT", 00:26:49.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.583 "adrfam": "ipv4", 00:26:49.583 "trsvcid": "$NVMF_PORT", 00:26:49.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.583 "hdgst": ${hdgst:-false}, 00:26:49.583 "ddgst": 
${ddgst:-false} 00:26:49.583 }, 00:26:49.583 "method": "bdev_nvme_attach_controller" 00:26:49.583 } 00:26:49.583 EOF 00:26:49.583 )") 00:26:49.583 15:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:49.583 15:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:49.583 15:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:49.583 15:35:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:49.583 "params": { 00:26:49.583 "name": "Nvme1", 00:26:49.583 "trtype": "tcp", 00:26:49.583 "traddr": "10.0.0.2", 00:26:49.583 "adrfam": "ipv4", 00:26:49.583 "trsvcid": "4420", 00:26:49.583 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:49.583 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:49.583 "hdgst": false, 00:26:49.583 "ddgst": false 00:26:49.583 }, 00:26:49.583 "method": "bdev_nvme_attach_controller" 00:26:49.583 }' 00:26:49.583 [2024-11-20 15:35:53.424361] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:26:49.583 [2024-11-20 15:35:53.424411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2315906 ] 00:26:49.841 [2024-11-20 15:35:53.498450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.841 [2024-11-20 15:35:53.537361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.100 Running I/O for 15 seconds... 
00:26:51.969 10969.00 IOPS, 42.85 MiB/s [2024-11-20T14:35:56.446Z] 11124.00 IOPS, 43.45 MiB/s [2024-11-20T14:35:56.446Z] 15:35:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2315570 00:26:52.538 15:35:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:52.538 [2024-11-20 15:35:56.392036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.538 [2024-11-20 15:35:56.392085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.538 [2024-11-20 15:35:56.392102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.538 [2024-11-20 15:35:56.392111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.538 [2024-11-20 15:35:56.392119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.538 [2024-11-20 15:35:56.392127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.538 [2024-11-20 15:35:56.392136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.538 [2024-11-20 15:35:56.392144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.538 [2024-11-20 15:35:56.392152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.538 [2024-11-20 15:35:56.392159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.538 [... repeated *NOTICE* READ/WRITE nvme_io_qpair_print_command and spdk_nvme_print_completion message pairs elided: all outstanding I/Os (READ lba:96984-97384, WRITE lba:97696-97960, len:8 each) complete with ABORTED - SQ DELETION (00/08) after the target was killed ...] 00:26:52.541 
[2024-11-20 15:35:56.393590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 
lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 
[2024-11-20 15:35:56.393850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.393990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.393997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.394005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.394012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.394020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 
lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.394026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.394034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.394041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.394049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.394056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.541 [2024-11-20 15:35:56.394064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.541 [2024-11-20 15:35:56.394070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.542 [2024-11-20 15:35:56.394078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.542 [2024-11-20 15:35:56.394084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.542 [2024-11-20 15:35:56.394093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.542 [2024-11-20 15:35:56.394099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.542 
[2024-11-20 15:35:56.394107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.542 [2024-11-20 15:35:56.394114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.542 [2024-11-20 15:35:56.394122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.542 [2024-11-20 15:35:56.394128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.542 [2024-11-20 15:35:56.394136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.542 [2024-11-20 15:35:56.394144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.542 [2024-11-20 15:35:56.394152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f44cf0 is same with the state(6) to be set 00:26:52.542 [2024-11-20 15:35:56.394160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:52.542 [2024-11-20 15:35:56.394165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:52.542 [2024-11-20 15:35:56.394173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97688 len:8 PRP1 0x0 PRP2 0x0 00:26:52.542 [2024-11-20 15:35:56.394180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.542 [2024-11-20 15:35:56.397063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.542 [2024-11-20 15:35:56.397115] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:52.542 [2024-11-20 15:35:56.397735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.542 [2024-11-20 15:35:56.397752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:52.542 [2024-11-20 15:35:56.397760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:52.542 [2024-11-20 15:35:56.397939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:52.542 [2024-11-20 15:35:56.398124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.542 [2024-11-20 15:35:56.398133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.542 [2024-11-20 15:35:56.398141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.542 [2024-11-20 15:35:56.398149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.542 [2024-11-20 15:35:56.410334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
[2024-11-20 15:35:56.410746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-20 15:35:56.410764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
[2024-11-20 15:35:56.410772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
[2024-11-20 15:35:56.410945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
[2024-11-20 15:35:56.411128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
[2024-11-20 15:35:56.411136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
[2024-11-20 15:35:56.411143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
[2024-11-20 15:35:56.411150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.542 [2024-11-20 15:35:56.423190] through 00:26:52.804 [2024-11-20 15:35:56.539935] [... the same reconnect cycle (nvme_ctrlr_disconnect "resetting controller"; posix_sock_create connect() failed, errno = 111; sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420; "Ctrlr is in error state"; "controller reinitialization failed"; "Resetting controller failed.") repeats ten times ...]
00:26:52.804 [2024-11-20 15:35:56.551980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.804 [2024-11-20 15:35:56.552426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.804 [2024-11-20 15:35:56.552442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:52.804 [2024-11-20 15:35:56.552452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:52.804 [2024-11-20 15:35:56.552624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:52.804 [2024-11-20 15:35:56.552797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.804 [2024-11-20 15:35:56.552805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.804 [2024-11-20 15:35:56.552812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.804 [2024-11-20 15:35:56.552818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.804 [2024-11-20 15:35:56.564779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.804 [2024-11-20 15:35:56.565201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.804 [2024-11-20 15:35:56.565245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:52.804 [2024-11-20 15:35:56.565268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:52.804 [2024-11-20 15:35:56.565781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:52.804 [2024-11-20 15:35:56.565960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.804 [2024-11-20 15:35:56.565969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.804 [2024-11-20 15:35:56.565975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.804 [2024-11-20 15:35:56.565982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.804 [2024-11-20 15:35:56.577676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.804 [2024-11-20 15:35:56.578082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.804 [2024-11-20 15:35:56.578099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:52.804 [2024-11-20 15:35:56.578106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:52.804 [2024-11-20 15:35:56.578278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:52.804 [2024-11-20 15:35:56.578451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.804 [2024-11-20 15:35:56.578459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.804 [2024-11-20 15:35:56.578465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.804 [2024-11-20 15:35:56.578471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.804 [2024-11-20 15:35:56.590489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.804 [2024-11-20 15:35:56.590886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.804 [2024-11-20 15:35:56.590902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:52.804 [2024-11-20 15:35:56.590908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:52.804 [2024-11-20 15:35:56.591099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:52.804 [2024-11-20 15:35:56.591276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.804 [2024-11-20 15:35:56.591285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.804 [2024-11-20 15:35:56.591291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.804 [2024-11-20 15:35:56.591297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.804 [2024-11-20 15:35:56.603365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.804 [2024-11-20 15:35:56.603792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.804 [2024-11-20 15:35:56.603836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:52.804 [2024-11-20 15:35:56.603859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:52.804 [2024-11-20 15:35:56.604453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:52.805 [2024-11-20 15:35:56.604887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.805 [2024-11-20 15:35:56.604903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.805 [2024-11-20 15:35:56.604917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.805 [2024-11-20 15:35:56.604930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.805 [2024-11-20 15:35:56.618337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.805 [2024-11-20 15:35:56.618831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.805 [2024-11-20 15:35:56.618852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:52.805 [2024-11-20 15:35:56.618863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:52.805 [2024-11-20 15:35:56.619124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:52.805 [2024-11-20 15:35:56.619379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.805 [2024-11-20 15:35:56.619390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.805 [2024-11-20 15:35:56.619399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.805 [2024-11-20 15:35:56.619408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.805 [2024-11-20 15:35:56.631430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.805 [2024-11-20 15:35:56.631842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.805 [2024-11-20 15:35:56.631858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:52.805 [2024-11-20 15:35:56.631865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:52.805 [2024-11-20 15:35:56.632044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:52.805 [2024-11-20 15:35:56.632217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.805 [2024-11-20 15:35:56.632225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.805 [2024-11-20 15:35:56.632235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.805 [2024-11-20 15:35:56.632242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.805 [2024-11-20 15:35:56.644307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.805 [2024-11-20 15:35:56.644730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.805 [2024-11-20 15:35:56.644747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:52.805 [2024-11-20 15:35:56.644754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:52.805 [2024-11-20 15:35:56.644926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:52.805 [2024-11-20 15:35:56.645106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.805 [2024-11-20 15:35:56.645115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.805 [2024-11-20 15:35:56.645121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.805 [2024-11-20 15:35:56.645127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.805 [2024-11-20 15:35:56.657456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.805 [2024-11-20 15:35:56.657867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.805 [2024-11-20 15:35:56.657884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:52.805 [2024-11-20 15:35:56.657892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:52.805 [2024-11-20 15:35:56.658075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:52.805 [2024-11-20 15:35:56.658254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.805 [2024-11-20 15:35:56.658263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.805 [2024-11-20 15:35:56.658270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.805 [2024-11-20 15:35:56.658277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.805 [2024-11-20 15:35:56.670645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.805 [2024-11-20 15:35:56.671057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.805 [2024-11-20 15:35:56.671074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:52.805 [2024-11-20 15:35:56.671082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:52.805 [2024-11-20 15:35:56.671259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:52.805 [2024-11-20 15:35:56.671444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.805 [2024-11-20 15:35:56.671452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.805 [2024-11-20 15:35:56.671459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.805 [2024-11-20 15:35:56.671465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.805 [2024-11-20 15:35:56.683552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.805 [2024-11-20 15:35:56.683955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.805 [2024-11-20 15:35:56.683971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:52.805 [2024-11-20 15:35:56.683994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:52.805 [2024-11-20 15:35:56.684167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:52.805 [2024-11-20 15:35:56.684339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.805 [2024-11-20 15:35:56.684347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.805 [2024-11-20 15:35:56.684354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.805 [2024-11-20 15:35:56.684360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.805 [2024-11-20 15:35:56.696475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.805 [2024-11-20 15:35:56.696891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.805 [2024-11-20 15:35:56.696907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:52.805 [2024-11-20 15:35:56.696914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:52.805 [2024-11-20 15:35:56.697093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:52.805 [2024-11-20 15:35:56.697265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.805 [2024-11-20 15:35:56.697273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.805 [2024-11-20 15:35:56.697279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.805 [2024-11-20 15:35:56.697285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.066 [2024-11-20 15:35:56.709655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.066 [2024-11-20 15:35:56.710076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-11-20 15:35:56.710093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.066 [2024-11-20 15:35:56.710100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.066 [2024-11-20 15:35:56.710275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.066 [2024-11-20 15:35:56.710441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.066 [2024-11-20 15:35:56.710449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.066 [2024-11-20 15:35:56.710455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.066 [2024-11-20 15:35:56.710461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.066 [2024-11-20 15:35:56.722533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.066 [2024-11-20 15:35:56.722922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-11-20 15:35:56.722938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.066 [2024-11-20 15:35:56.722954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.066 [2024-11-20 15:35:56.723142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.066 [2024-11-20 15:35:56.723314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.066 [2024-11-20 15:35:56.723323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.066 [2024-11-20 15:35:56.723329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.066 [2024-11-20 15:35:56.723335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.066 [2024-11-20 15:35:56.735446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.066 [2024-11-20 15:35:56.735788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-11-20 15:35:56.735804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.066 [2024-11-20 15:35:56.735811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.066 [2024-11-20 15:35:56.735991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.066 [2024-11-20 15:35:56.736164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.066 [2024-11-20 15:35:56.736172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.066 [2024-11-20 15:35:56.736179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.066 [2024-11-20 15:35:56.736185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.066 [2024-11-20 15:35:56.748331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.066 [2024-11-20 15:35:56.748759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-11-20 15:35:56.748803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.066 [2024-11-20 15:35:56.748826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.066 [2024-11-20 15:35:56.749257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.066 [2024-11-20 15:35:56.749430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.066 [2024-11-20 15:35:56.749438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.066 [2024-11-20 15:35:56.749445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.066 [2024-11-20 15:35:56.749451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.066 [2024-11-20 15:35:56.761149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.066 [2024-11-20 15:35:56.761573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-11-20 15:35:56.761589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.066 [2024-11-20 15:35:56.761596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.066 [2024-11-20 15:35:56.761768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.066 [2024-11-20 15:35:56.761944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.066 [2024-11-20 15:35:56.761959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.066 [2024-11-20 15:35:56.761966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.066 [2024-11-20 15:35:56.761972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.066 [2024-11-20 15:35:56.773991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.066 [2024-11-20 15:35:56.774432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-11-20 15:35:56.774449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.066 [2024-11-20 15:35:56.774458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.066 [2024-11-20 15:35:56.774632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.066 [2024-11-20 15:35:56.774807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.066 [2024-11-20 15:35:56.774815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.066 [2024-11-20 15:35:56.774822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.066 [2024-11-20 15:35:56.774828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.066 [2024-11-20 15:35:56.786916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.066 [2024-11-20 15:35:56.787353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-11-20 15:35:56.787398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.066 [2024-11-20 15:35:56.787422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.066 [2024-11-20 15:35:56.788017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.066 [2024-11-20 15:35:56.788210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.066 [2024-11-20 15:35:56.788221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.066 [2024-11-20 15:35:56.788227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.066 [2024-11-20 15:35:56.788234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.066 [2024-11-20 15:35:56.799835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.066 [2024-11-20 15:35:56.800255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-11-20 15:35:56.800272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.066 [2024-11-20 15:35:56.800279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.066 [2024-11-20 15:35:56.800452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.066 [2024-11-20 15:35:56.800625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.066 [2024-11-20 15:35:56.800633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.066 [2024-11-20 15:35:56.800643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.066 [2024-11-20 15:35:56.800649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.066 [2024-11-20 15:35:56.812649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.066 [2024-11-20 15:35:56.813062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.066 [2024-11-20 15:35:56.813079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.066 [2024-11-20 15:35:56.813086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.066 [2024-11-20 15:35:56.813258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.066 [2024-11-20 15:35:56.813430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.067 [2024-11-20 15:35:56.813438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.067 [2024-11-20 15:35:56.813444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.067 [2024-11-20 15:35:56.813451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.067 [2024-11-20 15:35:56.825570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.067 [2024-11-20 15:35:56.826004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-11-20 15:35:56.826050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.067 [2024-11-20 15:35:56.826073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.067 [2024-11-20 15:35:56.826653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.067 [2024-11-20 15:35:56.827259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.067 [2024-11-20 15:35:56.827287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.067 [2024-11-20 15:35:56.827307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.067 [2024-11-20 15:35:56.827326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.067 9500.67 IOPS, 37.11 MiB/s [2024-11-20T14:35:56.975Z] [2024-11-20 15:35:56.840894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.067 [2024-11-20 15:35:56.841463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.067 [2024-11-20 15:35:56.841509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.067 [2024-11-20 15:35:56.841532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.067 [2024-11-20 15:35:56.842124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.067 [2024-11-20 15:35:56.842425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.067 [2024-11-20 15:35:56.842437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.067 [2024-11-20 15:35:56.842447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.067 [2024-11-20 15:35:56.842455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.067 [2024-11-20 15:35:56.853925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.067 [2024-11-20 15:35:56.854385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.067 [2024-11-20 15:35:56.854430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.067 [2024-11-20 15:35:56.854453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.067 [2024-11-20 15:35:56.855044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.067 [2024-11-20 15:35:56.855471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.067 [2024-11-20 15:35:56.855479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.067 [2024-11-20 15:35:56.855485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.067 [2024-11-20 15:35:56.855491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.067 [2024-11-20 15:35:56.866789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.067 [2024-11-20 15:35:56.867229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.067 [2024-11-20 15:35:56.867245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.067 [2024-11-20 15:35:56.867252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.067 [2024-11-20 15:35:56.867424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.067 [2024-11-20 15:35:56.867596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.067 [2024-11-20 15:35:56.867604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.067 [2024-11-20 15:35:56.867611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.067 [2024-11-20 15:35:56.867617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.067 [2024-11-20 15:35:56.879794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.067 [2024-11-20 15:35:56.880111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.067 [2024-11-20 15:35:56.880128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.067 [2024-11-20 15:35:56.880136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.067 [2024-11-20 15:35:56.880308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.067 [2024-11-20 15:35:56.880481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.067 [2024-11-20 15:35:56.880489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.067 [2024-11-20 15:35:56.880496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.067 [2024-11-20 15:35:56.880502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.067 [2024-11-20 15:35:56.892936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.067 [2024-11-20 15:35:56.893299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.067 [2024-11-20 15:35:56.893316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.067 [2024-11-20 15:35:56.893330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.067 [2024-11-20 15:35:56.893509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.067 [2024-11-20 15:35:56.893687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.067 [2024-11-20 15:35:56.893695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.067 [2024-11-20 15:35:56.893701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.067 [2024-11-20 15:35:56.893707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.067 [2024-11-20 15:35:56.906012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.067 [2024-11-20 15:35:56.906372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.067 [2024-11-20 15:35:56.906389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.067 [2024-11-20 15:35:56.906396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.067 [2024-11-20 15:35:56.906573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.067 [2024-11-20 15:35:56.906751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.067 [2024-11-20 15:35:56.906760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.067 [2024-11-20 15:35:56.906766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.067 [2024-11-20 15:35:56.906773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.067 [2024-11-20 15:35:56.919090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.067 [2024-11-20 15:35:56.919469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.067 [2024-11-20 15:35:56.919486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.067 [2024-11-20 15:35:56.919493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.067 [2024-11-20 15:35:56.919671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.067 [2024-11-20 15:35:56.919849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.067 [2024-11-20 15:35:56.919857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.067 [2024-11-20 15:35:56.919864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.067 [2024-11-20 15:35:56.919871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.067 [2024-11-20 15:35:56.932149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.067 [2024-11-20 15:35:56.932509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.067 [2024-11-20 15:35:56.932526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.067 [2024-11-20 15:35:56.932533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.068 [2024-11-20 15:35:56.932706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.068 [2024-11-20 15:35:56.932883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.068 [2024-11-20 15:35:56.932891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.068 [2024-11-20 15:35:56.932898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.068 [2024-11-20 15:35:56.932904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.068 [2024-11-20 15:35:56.945146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.068 [2024-11-20 15:35:56.945519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.068 [2024-11-20 15:35:56.945535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.068 [2024-11-20 15:35:56.945542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.068 [2024-11-20 15:35:56.945713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.068 [2024-11-20 15:35:56.945887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.068 [2024-11-20 15:35:56.945895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.068 [2024-11-20 15:35:56.945901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.068 [2024-11-20 15:35:56.945907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.068 [2024-11-20 15:35:56.957957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.068 [2024-11-20 15:35:56.958259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.068 [2024-11-20 15:35:56.958276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.068 [2024-11-20 15:35:56.958283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.068 [2024-11-20 15:35:56.958456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.068 [2024-11-20 15:35:56.958628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.068 [2024-11-20 15:35:56.958636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.068 [2024-11-20 15:35:56.958642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.068 [2024-11-20 15:35:56.958648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.328 [2024-11-20 15:35:56.971110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.328 [2024-11-20 15:35:56.971400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.328 [2024-11-20 15:35:56.971416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.328 [2024-11-20 15:35:56.971424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.328 [2024-11-20 15:35:56.971600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.328 [2024-11-20 15:35:56.971785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.328 [2024-11-20 15:35:56.971793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.328 [2024-11-20 15:35:56.971803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.328 [2024-11-20 15:35:56.971809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.328 [2024-11-20 15:35:56.984125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.328 [2024-11-20 15:35:56.984495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.328 [2024-11-20 15:35:56.984543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.328 [2024-11-20 15:35:56.984566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.328 [2024-11-20 15:35:56.985161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.328 [2024-11-20 15:35:56.985734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.328 [2024-11-20 15:35:56.985742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.328 [2024-11-20 15:35:56.985748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.328 [2024-11-20 15:35:56.985754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.328 [2024-11-20 15:35:56.996986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.328 [2024-11-20 15:35:56.997336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.328 [2024-11-20 15:35:56.997352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.328 [2024-11-20 15:35:56.997360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.328 [2024-11-20 15:35:56.997532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.328 [2024-11-20 15:35:56.997705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.328 [2024-11-20 15:35:56.997713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.328 [2024-11-20 15:35:56.997719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.328 [2024-11-20 15:35:56.997726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.328 [2024-11-20 15:35:57.009785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.328 [2024-11-20 15:35:57.010086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.328 [2024-11-20 15:35:57.010103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.328 [2024-11-20 15:35:57.010111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.328 [2024-11-20 15:35:57.010282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.328 [2024-11-20 15:35:57.010454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.328 [2024-11-20 15:35:57.010463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.328 [2024-11-20 15:35:57.010469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.328 [2024-11-20 15:35:57.010475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.328 [2024-11-20 15:35:57.022875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.328 [2024-11-20 15:35:57.023245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.328 [2024-11-20 15:35:57.023288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.328 [2024-11-20 15:35:57.023311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.328 [2024-11-20 15:35:57.023794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.328 [2024-11-20 15:35:57.023974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.328 [2024-11-20 15:35:57.023983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.328 [2024-11-20 15:35:57.023990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.328 [2024-11-20 15:35:57.023996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.328 [2024-11-20 15:35:57.035710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.328 [2024-11-20 15:35:57.036109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.328 [2024-11-20 15:35:57.036127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.328 [2024-11-20 15:35:57.036134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.328 [2024-11-20 15:35:57.036307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.328 [2024-11-20 15:35:57.036479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.328 [2024-11-20 15:35:57.036487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.328 [2024-11-20 15:35:57.036494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.328 [2024-11-20 15:35:57.036500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.328 [2024-11-20 15:35:57.048641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.328 [2024-11-20 15:35:57.048968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.328 [2024-11-20 15:35:57.049001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.328 [2024-11-20 15:35:57.049009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.328 [2024-11-20 15:35:57.049181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.328 [2024-11-20 15:35:57.049353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.328 [2024-11-20 15:35:57.049362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.328 [2024-11-20 15:35:57.049368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.328 [2024-11-20 15:35:57.049374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.328 [2024-11-20 15:35:57.061571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.328 [2024-11-20 15:35:57.062001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.328 [2024-11-20 15:35:57.062046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.328 [2024-11-20 15:35:57.062078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.328 [2024-11-20 15:35:57.062660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.328 [2024-11-20 15:35:57.062903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.328 [2024-11-20 15:35:57.062912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.328 [2024-11-20 15:35:57.062919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.328 [2024-11-20 15:35:57.062926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.328 [2024-11-20 15:35:57.074371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.328 [2024-11-20 15:35:57.074824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.328 [2024-11-20 15:35:57.074868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.328 [2024-11-20 15:35:57.074891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.328 [2024-11-20 15:35:57.075484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.328 [2024-11-20 15:35:57.076015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.328 [2024-11-20 15:35:57.076025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.328 [2024-11-20 15:35:57.076033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.328 [2024-11-20 15:35:57.076041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.328 [2024-11-20 15:35:57.087259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.328 [2024-11-20 15:35:57.087679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.328 [2024-11-20 15:35:57.087695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.328 [2024-11-20 15:35:57.087702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.329 [2024-11-20 15:35:57.087874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.329 [2024-11-20 15:35:57.088053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.329 [2024-11-20 15:35:57.088062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.329 [2024-11-20 15:35:57.088068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.329 [2024-11-20 15:35:57.088075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.329 [2024-11-20 15:35:57.100145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.329 [2024-11-20 15:35:57.100550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.329 [2024-11-20 15:35:57.100566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.329 [2024-11-20 15:35:57.100573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.329 [2024-11-20 15:35:57.100744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.329 [2024-11-20 15:35:57.100921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.329 [2024-11-20 15:35:57.100929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.329 [2024-11-20 15:35:57.100935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.329 [2024-11-20 15:35:57.100941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.329 [2024-11-20 15:35:57.113027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.329 [2024-11-20 15:35:57.113328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.329 [2024-11-20 15:35:57.113344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.329 [2024-11-20 15:35:57.113351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.329 [2024-11-20 15:35:57.113524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.329 [2024-11-20 15:35:57.113696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.329 [2024-11-20 15:35:57.113704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.329 [2024-11-20 15:35:57.113710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.329 [2024-11-20 15:35:57.113717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.329 [2024-11-20 15:35:57.125867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.329 [2024-11-20 15:35:57.126286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.329 [2024-11-20 15:35:57.126303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.329 [2024-11-20 15:35:57.126310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.329 [2024-11-20 15:35:57.126481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.329 [2024-11-20 15:35:57.126654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.329 [2024-11-20 15:35:57.126662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.329 [2024-11-20 15:35:57.126668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.329 [2024-11-20 15:35:57.126675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.329 [2024-11-20 15:35:57.138734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.329 [2024-11-20 15:35:57.139078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.329 [2024-11-20 15:35:57.139135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.329 [2024-11-20 15:35:57.139158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.329 [2024-11-20 15:35:57.139736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.329 [2024-11-20 15:35:57.140008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.329 [2024-11-20 15:35:57.140017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.329 [2024-11-20 15:35:57.140027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.329 [2024-11-20 15:35:57.140033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.329 [2024-11-20 15:35:57.151582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.329 [2024-11-20 15:35:57.152020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.329 [2024-11-20 15:35:57.152037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:53.329 [2024-11-20 15:35:57.152044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:53.329 [2024-11-20 15:35:57.152216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:53.329 [2024-11-20 15:35:57.152392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.329 [2024-11-20 15:35:57.152400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.329 [2024-11-20 15:35:57.152407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.329 [2024-11-20 15:35:57.152413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.329 [2024-11-20 15:35:57.164434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.329 [2024-11-20 15:35:57.164821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.329 [2024-11-20 15:35:57.164838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.329 [2024-11-20 15:35:57.164845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.329 [2024-11-20 15:35:57.165029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.329 [2024-11-20 15:35:57.165202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.329 [2024-11-20 15:35:57.165210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.329 [2024-11-20 15:35:57.165216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.329 [2024-11-20 15:35:57.165222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.329 [2024-11-20 15:35:57.177520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.329 [2024-11-20 15:35:57.177959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.329 [2024-11-20 15:35:57.177976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.329 [2024-11-20 15:35:57.177983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.329 [2024-11-20 15:35:57.178176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.329 [2024-11-20 15:35:57.178355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.329 [2024-11-20 15:35:57.178365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.329 [2024-11-20 15:35:57.178377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.329 [2024-11-20 15:35:57.178384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.329 [2024-11-20 15:35:57.190614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.329 [2024-11-20 15:35:57.190962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.329 [2024-11-20 15:35:57.190979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.329 [2024-11-20 15:35:57.190986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.329 [2024-11-20 15:35:57.191158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.329 [2024-11-20 15:35:57.191331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.329 [2024-11-20 15:35:57.191339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.329 [2024-11-20 15:35:57.191345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.329 [2024-11-20 15:35:57.191351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.329 [2024-11-20 15:35:57.203589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.329 [2024-11-20 15:35:57.203971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.329 [2024-11-20 15:35:57.204016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.329 [2024-11-20 15:35:57.204039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.329 [2024-11-20 15:35:57.204620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.329 [2024-11-20 15:35:57.205214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.329 [2024-11-20 15:35:57.205241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.329 [2024-11-20 15:35:57.205262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.329 [2024-11-20 15:35:57.205281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.329 [2024-11-20 15:35:57.216416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.329 [2024-11-20 15:35:57.216803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.329 [2024-11-20 15:35:57.216848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.330 [2024-11-20 15:35:57.216870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.330 [2024-11-20 15:35:57.217336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.330 [2024-11-20 15:35:57.217510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.330 [2024-11-20 15:35:57.217518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.330 [2024-11-20 15:35:57.217525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.330 [2024-11-20 15:35:57.217531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.330 [2024-11-20 15:35:57.229498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.330 [2024-11-20 15:35:57.229974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.330 [2024-11-20 15:35:57.229992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.330 [2024-11-20 15:35:57.230003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.330 [2024-11-20 15:35:57.230183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.330 [2024-11-20 15:35:57.230363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.330 [2024-11-20 15:35:57.230373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.330 [2024-11-20 15:35:57.230379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.330 [2024-11-20 15:35:57.230385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.588 [2024-11-20 15:35:57.242640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.588 [2024-11-20 15:35:57.243036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-11-20 15:35:57.243053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.588 [2024-11-20 15:35:57.243061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.588 [2024-11-20 15:35:57.243238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.588 [2024-11-20 15:35:57.243416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.588 [2024-11-20 15:35:57.243424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.588 [2024-11-20 15:35:57.243431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.588 [2024-11-20 15:35:57.243437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.588 [2024-11-20 15:35:57.255806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.588 [2024-11-20 15:35:57.256222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-11-20 15:35:57.256239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.588 [2024-11-20 15:35:57.256247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.588 [2024-11-20 15:35:57.256424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.588 [2024-11-20 15:35:57.256602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.588 [2024-11-20 15:35:57.256611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.588 [2024-11-20 15:35:57.256617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.588 [2024-11-20 15:35:57.256624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.588 [2024-11-20 15:35:57.268982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.588 [2024-11-20 15:35:57.269426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-11-20 15:35:57.269469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.588 [2024-11-20 15:35:57.269491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.588 [2024-11-20 15:35:57.270083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.588 [2024-11-20 15:35:57.270328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.588 [2024-11-20 15:35:57.270336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.588 [2024-11-20 15:35:57.270342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.588 [2024-11-20 15:35:57.270349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.588 [2024-11-20 15:35:57.282042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.588 [2024-11-20 15:35:57.282476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-11-20 15:35:57.282493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.588 [2024-11-20 15:35:57.282500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.588 [2024-11-20 15:35:57.282677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.588 [2024-11-20 15:35:57.282856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.588 [2024-11-20 15:35:57.282864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.588 [2024-11-20 15:35:57.282870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.588 [2024-11-20 15:35:57.282877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.588 [2024-11-20 15:35:57.295071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.588 [2024-11-20 15:35:57.295497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-11-20 15:35:57.295514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.588 [2024-11-20 15:35:57.295522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.588 [2024-11-20 15:35:57.295698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.588 [2024-11-20 15:35:57.295877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.588 [2024-11-20 15:35:57.295885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.588 [2024-11-20 15:35:57.295892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.588 [2024-11-20 15:35:57.295899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.588 [2024-11-20 15:35:57.307937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.588 [2024-11-20 15:35:57.308384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-11-20 15:35:57.308426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.589 [2024-11-20 15:35:57.308448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.589 [2024-11-20 15:35:57.309038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.589 [2024-11-20 15:35:57.309575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.589 [2024-11-20 15:35:57.309583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.589 [2024-11-20 15:35:57.309593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.589 [2024-11-20 15:35:57.309600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.589 [2024-11-20 15:35:57.320955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.589 [2024-11-20 15:35:57.321303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-11-20 15:35:57.321320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.589 [2024-11-20 15:35:57.321327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.589 [2024-11-20 15:35:57.321501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.589 [2024-11-20 15:35:57.321672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.589 [2024-11-20 15:35:57.321681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.589 [2024-11-20 15:35:57.321688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.589 [2024-11-20 15:35:57.321694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.589 [2024-11-20 15:35:57.333910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.589 [2024-11-20 15:35:57.334363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-11-20 15:35:57.334380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.589 [2024-11-20 15:35:57.334388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.589 [2024-11-20 15:35:57.334565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.589 [2024-11-20 15:35:57.334743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.589 [2024-11-20 15:35:57.334752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.589 [2024-11-20 15:35:57.334758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.589 [2024-11-20 15:35:57.334765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.589 [2024-11-20 15:35:57.346806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.589 [2024-11-20 15:35:57.347258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-11-20 15:35:57.347276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.589 [2024-11-20 15:35:57.347283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.589 [2024-11-20 15:35:57.347456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.589 [2024-11-20 15:35:57.347629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.589 [2024-11-20 15:35:57.347637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.589 [2024-11-20 15:35:57.347644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.589 [2024-11-20 15:35:57.347650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.589 [2024-11-20 15:35:57.359671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.589 [2024-11-20 15:35:57.360104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-11-20 15:35:57.360121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.589 [2024-11-20 15:35:57.360128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.589 [2024-11-20 15:35:57.360300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.589 [2024-11-20 15:35:57.360472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.589 [2024-11-20 15:35:57.360481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.589 [2024-11-20 15:35:57.360487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.589 [2024-11-20 15:35:57.360493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.589 [2024-11-20 15:35:57.372541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.589 [2024-11-20 15:35:57.372978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-11-20 15:35:57.372994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.589 [2024-11-20 15:35:57.373001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.589 [2024-11-20 15:35:57.373185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.589 [2024-11-20 15:35:57.373348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.589 [2024-11-20 15:35:57.373356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.589 [2024-11-20 15:35:57.373362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.589 [2024-11-20 15:35:57.373368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.589 [2024-11-20 15:35:57.385376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.589 [2024-11-20 15:35:57.385699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-11-20 15:35:57.385714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.589 [2024-11-20 15:35:57.385721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.589 [2024-11-20 15:35:57.385893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.589 [2024-11-20 15:35:57.386072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.589 [2024-11-20 15:35:57.386081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.589 [2024-11-20 15:35:57.386087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.589 [2024-11-20 15:35:57.386093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.589 [2024-11-20 15:35:57.398191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.589 [2024-11-20 15:35:57.398645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-11-20 15:35:57.398661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.589 [2024-11-20 15:35:57.398671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.589 [2024-11-20 15:35:57.398844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.589 [2024-11-20 15:35:57.399022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.589 [2024-11-20 15:35:57.399031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.589 [2024-11-20 15:35:57.399038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.589 [2024-11-20 15:35:57.399043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.589 [2024-11-20 15:35:57.411046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.589 [2024-11-20 15:35:57.411439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-11-20 15:35:57.411455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.589 [2024-11-20 15:35:57.411462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.589 [2024-11-20 15:35:57.411625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.589 [2024-11-20 15:35:57.411787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.589 [2024-11-20 15:35:57.411795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.589 [2024-11-20 15:35:57.411801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.589 [2024-11-20 15:35:57.411807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.589 [2024-11-20 15:35:57.423952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.589 [2024-11-20 15:35:57.424395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-11-20 15:35:57.424411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.589 [2024-11-20 15:35:57.424418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.589 [2024-11-20 15:35:57.424580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.589 [2024-11-20 15:35:57.424749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.589 [2024-11-20 15:35:57.424758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.589 [2024-11-20 15:35:57.424764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.589 [2024-11-20 15:35:57.424770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.589 [2024-11-20 15:35:57.437077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.590 [2024-11-20 15:35:57.437531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-11-20 15:35:57.437575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.590 [2024-11-20 15:35:57.437599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.590 [2024-11-20 15:35:57.438021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.590 [2024-11-20 15:35:57.438202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.590 [2024-11-20 15:35:57.438211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.590 [2024-11-20 15:35:57.438217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.590 [2024-11-20 15:35:57.438223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.590 [2024-11-20 15:35:57.450122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.590 [2024-11-20 15:35:57.450561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-11-20 15:35:57.450605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.590 [2024-11-20 15:35:57.450628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.590 [2024-11-20 15:35:57.451128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.590 [2024-11-20 15:35:57.451302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.590 [2024-11-20 15:35:57.451310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.590 [2024-11-20 15:35:57.451317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.590 [2024-11-20 15:35:57.451323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.590 [2024-11-20 15:35:57.462923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.590 [2024-11-20 15:35:57.463374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-11-20 15:35:57.463414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.590 [2024-11-20 15:35:57.463439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.590 [2024-11-20 15:35:57.464036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.590 [2024-11-20 15:35:57.464425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.590 [2024-11-20 15:35:57.464442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.590 [2024-11-20 15:35:57.464455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.590 [2024-11-20 15:35:57.464469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.590 [2024-11-20 15:35:57.477810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.590 [2024-11-20 15:35:57.478336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-11-20 15:35:57.478357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.590 [2024-11-20 15:35:57.478368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.590 [2024-11-20 15:35:57.478621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.590 [2024-11-20 15:35:57.478875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.590 [2024-11-20 15:35:57.478886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.590 [2024-11-20 15:35:57.478900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.590 [2024-11-20 15:35:57.478909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.590 [2024-11-20 15:35:57.490981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.590 [2024-11-20 15:35:57.491419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-11-20 15:35:57.491436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.590 [2024-11-20 15:35:57.491443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.590 [2024-11-20 15:35:57.491621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.590 [2024-11-20 15:35:57.491798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.590 [2024-11-20 15:35:57.491806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.590 [2024-11-20 15:35:57.491812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.590 [2024-11-20 15:35:57.491818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.848 [2024-11-20 15:35:57.503878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.848 [2024-11-20 15:35:57.504242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.848 [2024-11-20 15:35:57.504286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.848 [2024-11-20 15:35:57.504309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.848 [2024-11-20 15:35:57.504888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.848 [2024-11-20 15:35:57.505258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.848 [2024-11-20 15:35:57.505275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.848 [2024-11-20 15:35:57.505289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.848 [2024-11-20 15:35:57.505302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.848 [2024-11-20 15:35:57.518725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.848 [2024-11-20 15:35:57.519219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.848 [2024-11-20 15:35:57.519240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.848 [2024-11-20 15:35:57.519250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.848 [2024-11-20 15:35:57.519501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.848 [2024-11-20 15:35:57.519753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.848 [2024-11-20 15:35:57.519765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.848 [2024-11-20 15:35:57.519774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.848 [2024-11-20 15:35:57.519782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.848 [2024-11-20 15:35:57.531685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.848 [2024-11-20 15:35:57.532120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.848 [2024-11-20 15:35:57.532137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.848 [2024-11-20 15:35:57.532144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.848 [2024-11-20 15:35:57.532316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.848 [2024-11-20 15:35:57.532488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.848 [2024-11-20 15:35:57.532496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.848 [2024-11-20 15:35:57.532503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.848 [2024-11-20 15:35:57.532509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.848 [2024-11-20 15:35:57.544712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.848 [2024-11-20 15:35:57.545093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.848 [2024-11-20 15:35:57.545109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.848 [2024-11-20 15:35:57.545116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.848 [2024-11-20 15:35:57.545288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.848 [2024-11-20 15:35:57.545461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.848 [2024-11-20 15:35:57.545469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.848 [2024-11-20 15:35:57.545476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.848 [2024-11-20 15:35:57.545482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.848 [2024-11-20 15:35:57.557513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.848 [2024-11-20 15:35:57.557913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.848 [2024-11-20 15:35:57.557968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.848 [2024-11-20 15:35:57.557992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.848 [2024-11-20 15:35:57.558573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.848 [2024-11-20 15:35:57.559146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.848 [2024-11-20 15:35:57.559155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.848 [2024-11-20 15:35:57.559161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.848 [2024-11-20 15:35:57.559167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.848 [2024-11-20 15:35:57.570424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.848 [2024-11-20 15:35:57.570877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.848 [2024-11-20 15:35:57.570919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.848 [2024-11-20 15:35:57.570963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.848 [2024-11-20 15:35:57.571547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.848 [2024-11-20 15:35:57.572138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.848 [2024-11-20 15:35:57.572164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.848 [2024-11-20 15:35:57.572171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.848 [2024-11-20 15:35:57.572177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.848 [2024-11-20 15:35:57.583280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.848 [2024-11-20 15:35:57.583636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.848 [2024-11-20 15:35:57.583680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.848 [2024-11-20 15:35:57.583702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.848 [2024-11-20 15:35:57.584188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.848 [2024-11-20 15:35:57.584352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.848 [2024-11-20 15:35:57.584359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.848 [2024-11-20 15:35:57.584365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.848 [2024-11-20 15:35:57.584371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.848 [2024-11-20 15:35:57.596238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.848 [2024-11-20 15:35:57.596666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.848 [2024-11-20 15:35:57.596681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.848 [2024-11-20 15:35:57.596688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.848 [2024-11-20 15:35:57.596850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.848 [2024-11-20 15:35:57.597018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.848 [2024-11-20 15:35:57.597027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.848 [2024-11-20 15:35:57.597033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.848 [2024-11-20 15:35:57.597038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.848 [2024-11-20 15:35:57.609110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.848 [2024-11-20 15:35:57.609424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.848 [2024-11-20 15:35:57.609439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.848 [2024-11-20 15:35:57.609446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.848 [2024-11-20 15:35:57.609608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.848 [2024-11-20 15:35:57.609774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.848 [2024-11-20 15:35:57.609781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.848 [2024-11-20 15:35:57.609787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.848 [2024-11-20 15:35:57.609793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.848 [2024-11-20 15:35:57.621978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.848 [2024-11-20 15:35:57.622342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.848 [2024-11-20 15:35:57.622386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.848 [2024-11-20 15:35:57.622409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.848 [2024-11-20 15:35:57.623003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.848 [2024-11-20 15:35:57.623230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.848 [2024-11-20 15:35:57.623238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.848 [2024-11-20 15:35:57.623244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.848 [2024-11-20 15:35:57.623250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.848 [2024-11-20 15:35:57.634877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.848 [2024-11-20 15:35:57.635325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.849 [2024-11-20 15:35:57.635342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.849 [2024-11-20 15:35:57.635349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.849 [2024-11-20 15:35:57.635521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.849 [2024-11-20 15:35:57.635692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.849 [2024-11-20 15:35:57.635700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.849 [2024-11-20 15:35:57.635706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.849 [2024-11-20 15:35:57.635712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.849 [2024-11-20 15:35:57.647715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.849 [2024-11-20 15:35:57.648140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.849 [2024-11-20 15:35:57.648185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.849 [2024-11-20 15:35:57.648208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.849 [2024-11-20 15:35:57.648607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.849 [2024-11-20 15:35:57.648770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.849 [2024-11-20 15:35:57.648778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.849 [2024-11-20 15:35:57.648787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.849 [2024-11-20 15:35:57.648793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.849 [2024-11-20 15:35:57.660503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.849 [2024-11-20 15:35:57.660918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.849 [2024-11-20 15:35:57.660933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.849 [2024-11-20 15:35:57.660940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.849 [2024-11-20 15:35:57.661133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.849 [2024-11-20 15:35:57.661305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.849 [2024-11-20 15:35:57.661313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.849 [2024-11-20 15:35:57.661320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.849 [2024-11-20 15:35:57.661326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.849 [2024-11-20 15:35:57.673351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.849 [2024-11-20 15:35:57.673775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.849 [2024-11-20 15:35:57.673790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.849 [2024-11-20 15:35:57.673797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.849 [2024-11-20 15:35:57.673965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.849 [2024-11-20 15:35:57.674152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.849 [2024-11-20 15:35:57.674160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.849 [2024-11-20 15:35:57.674166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.849 [2024-11-20 15:35:57.674173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.849 [2024-11-20 15:35:57.686596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.849 [2024-11-20 15:35:57.687034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.849 [2024-11-20 15:35:57.687052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.849 [2024-11-20 15:35:57.687061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.849 [2024-11-20 15:35:57.687239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.849 [2024-11-20 15:35:57.687418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.849 [2024-11-20 15:35:57.687427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.849 [2024-11-20 15:35:57.687435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.849 [2024-11-20 15:35:57.687441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.849 [2024-11-20 15:35:57.699447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.849 [2024-11-20 15:35:57.699874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.849 [2024-11-20 15:35:57.699890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.849 [2024-11-20 15:35:57.699897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.849 [2024-11-20 15:35:57.700075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.849 [2024-11-20 15:35:57.700248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.849 [2024-11-20 15:35:57.700256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.849 [2024-11-20 15:35:57.700263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.849 [2024-11-20 15:35:57.700268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.849 [2024-11-20 15:35:57.712374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.849 [2024-11-20 15:35:57.712797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.849 [2024-11-20 15:35:57.712812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.849 [2024-11-20 15:35:57.712819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.849 [2024-11-20 15:35:57.713004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.849 [2024-11-20 15:35:57.713177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.849 [2024-11-20 15:35:57.713185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.849 [2024-11-20 15:35:57.713192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.849 [2024-11-20 15:35:57.713198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.849 [2024-11-20 15:35:57.725171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.849 [2024-11-20 15:35:57.725619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.849 [2024-11-20 15:35:57.725635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.849 [2024-11-20 15:35:57.725642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.849 [2024-11-20 15:35:57.725818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.849 [2024-11-20 15:35:57.725997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.849 [2024-11-20 15:35:57.726006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.849 [2024-11-20 15:35:57.726012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.849 [2024-11-20 15:35:57.726018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.849 [2024-11-20 15:35:57.738039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.849 [2024-11-20 15:35:57.738458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.849 [2024-11-20 15:35:57.738475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.849 [2024-11-20 15:35:57.738485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.849 [2024-11-20 15:35:57.738647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.849 [2024-11-20 15:35:57.738810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.849 [2024-11-20 15:35:57.738818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.849 [2024-11-20 15:35:57.738824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.849 [2024-11-20 15:35:57.738829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.849 [2024-11-20 15:35:57.751191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.849 [2024-11-20 15:35:57.751632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.849 [2024-11-20 15:35:57.751649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:53.849 [2024-11-20 15:35:57.751657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:53.849 [2024-11-20 15:35:57.751834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:53.849 [2024-11-20 15:35:57.752017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.849 [2024-11-20 15:35:57.752026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.849 [2024-11-20 15:35:57.752032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.849 [2024-11-20 15:35:57.752039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.108 [2024-11-20 15:35:57.764073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.108 [2024-11-20 15:35:57.764469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.108 [2024-11-20 15:35:57.764485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.108 [2024-11-20 15:35:57.764492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.108 [2024-11-20 15:35:57.764654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.108 [2024-11-20 15:35:57.764817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.108 [2024-11-20 15:35:57.764824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.108 [2024-11-20 15:35:57.764830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.109 [2024-11-20 15:35:57.764836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.109 [2024-11-20 15:35:57.776865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.109 [2024-11-20 15:35:57.777225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.109 [2024-11-20 15:35:57.777241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.109 [2024-11-20 15:35:57.777248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.109 [2024-11-20 15:35:57.777420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.109 [2024-11-20 15:35:57.777596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.109 [2024-11-20 15:35:57.777605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.109 [2024-11-20 15:35:57.777611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.109 [2024-11-20 15:35:57.777617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.109 [2024-11-20 15:35:57.789783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.109 [2024-11-20 15:35:57.790228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.109 [2024-11-20 15:35:57.790272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.109 [2024-11-20 15:35:57.790295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.109 [2024-11-20 15:35:57.790768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.109 [2024-11-20 15:35:57.790939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.109 [2024-11-20 15:35:57.790953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.109 [2024-11-20 15:35:57.790961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.109 [2024-11-20 15:35:57.790967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.109 [2024-11-20 15:35:57.802723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.109 [2024-11-20 15:35:57.803166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.109 [2024-11-20 15:35:57.803183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.109 [2024-11-20 15:35:57.803192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.109 [2024-11-20 15:35:57.803365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.109 [2024-11-20 15:35:57.803539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.109 [2024-11-20 15:35:57.803547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.109 [2024-11-20 15:35:57.803553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.109 [2024-11-20 15:35:57.803559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.109 [2024-11-20 15:35:57.815526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.109 [2024-11-20 15:35:57.815968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.109 [2024-11-20 15:35:57.816013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.109 [2024-11-20 15:35:57.816036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.109 [2024-11-20 15:35:57.816581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.109 [2024-11-20 15:35:57.816754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.109 [2024-11-20 15:35:57.816762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.109 [2024-11-20 15:35:57.816772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.109 [2024-11-20 15:35:57.816778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.109 [2024-11-20 15:35:57.828441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.109 [2024-11-20 15:35:57.828872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.109 [2024-11-20 15:35:57.828919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.109 [2024-11-20 15:35:57.828942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.109 [2024-11-20 15:35:57.829551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.109 [2024-11-20 15:35:57.829981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.109 [2024-11-20 15:35:57.829990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.109 [2024-11-20 15:35:57.829996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.109 [2024-11-20 15:35:57.830003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.109 7125.50 IOPS, 27.83 MiB/s [2024-11-20T14:35:58.017Z]
00:26:54.109 [2024-11-20 15:35:57.841417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.109 [2024-11-20 15:35:57.841839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.109 [2024-11-20 15:35:57.841856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.109 [2024-11-20 15:35:57.841863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.109 [2024-11-20 15:35:57.842040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.109 [2024-11-20 15:35:57.842213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.109 [2024-11-20 15:35:57.842221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.109 [2024-11-20 15:35:57.842228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.109 [2024-11-20 15:35:57.842234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.109 [2024-11-20 15:35:57.854253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.109 [2024-11-20 15:35:57.854678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.109 [2024-11-20 15:35:57.854721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.109 [2024-11-20 15:35:57.854744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.109 [2024-11-20 15:35:57.855338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.109 [2024-11-20 15:35:57.855846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.109 [2024-11-20 15:35:57.855854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.109 [2024-11-20 15:35:57.855861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.109 [2024-11-20 15:35:57.855866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.109 [2024-11-20 15:35:57.867113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.109 [2024-11-20 15:35:57.867523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.109 [2024-11-20 15:35:57.867562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.109 [2024-11-20 15:35:57.867587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.109 [2024-11-20 15:35:57.868180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.109 [2024-11-20 15:35:57.868763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.109 [2024-11-20 15:35:57.868788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.109 [2024-11-20 15:35:57.868808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.109 [2024-11-20 15:35:57.868828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.109 [2024-11-20 15:35:57.880125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.109 [2024-11-20 15:35:57.880547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.109 [2024-11-20 15:35:57.880563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.109 [2024-11-20 15:35:57.880570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.109 [2024-11-20 15:35:57.880733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.109 [2024-11-20 15:35:57.880896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.109 [2024-11-20 15:35:57.880904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.109 [2024-11-20 15:35:57.880910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.109 [2024-11-20 15:35:57.880916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.109 [2024-11-20 15:35:57.892959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.109 [2024-11-20 15:35:57.893405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.109 [2024-11-20 15:35:57.893449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.109 [2024-11-20 15:35:57.893473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.109 [2024-11-20 15:35:57.894004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.110 [2024-11-20 15:35:57.894178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.110 [2024-11-20 15:35:57.894186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.110 [2024-11-20 15:35:57.894192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.110 [2024-11-20 15:35:57.894198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.110 [2024-11-20 15:35:57.905887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.110 [2024-11-20 15:35:57.906232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.110 [2024-11-20 15:35:57.906251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.110 [2024-11-20 15:35:57.906258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.110 [2024-11-20 15:35:57.906420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.110 [2024-11-20 15:35:57.906583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.110 [2024-11-20 15:35:57.906591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.110 [2024-11-20 15:35:57.906597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.110 [2024-11-20 15:35:57.906603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.110 [2024-11-20 15:35:57.918696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.110 [2024-11-20 15:35:57.919074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.110 [2024-11-20 15:35:57.919091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.110 [2024-11-20 15:35:57.919098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.110 [2024-11-20 15:35:57.919270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.110 [2024-11-20 15:35:57.919443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.110 [2024-11-20 15:35:57.919451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.110 [2024-11-20 15:35:57.919457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.110 [2024-11-20 15:35:57.919463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.110 [2024-11-20 15:35:57.931501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.110 [2024-11-20 15:35:57.931933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.110 [2024-11-20 15:35:57.931955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.110 [2024-11-20 15:35:57.931963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.110 [2024-11-20 15:35:57.932150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.110 [2024-11-20 15:35:57.932322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.110 [2024-11-20 15:35:57.932330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.110 [2024-11-20 15:35:57.932337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.110 [2024-11-20 15:35:57.932344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.110 [2024-11-20 15:35:57.944661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.110 [2024-11-20 15:35:57.945115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.110 [2024-11-20 15:35:57.945160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.110 [2024-11-20 15:35:57.945183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.110 [2024-11-20 15:35:57.945771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.110 [2024-11-20 15:35:57.946021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.110 [2024-11-20 15:35:57.946029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.110 [2024-11-20 15:35:57.946036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.110 [2024-11-20 15:35:57.946042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.110 [2024-11-20 15:35:57.957695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.110 [2024-11-20 15:35:57.958019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.110 [2024-11-20 15:35:57.958035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.110 [2024-11-20 15:35:57.958042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.110 [2024-11-20 15:35:57.958205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.110 [2024-11-20 15:35:57.958369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.110 [2024-11-20 15:35:57.958376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.110 [2024-11-20 15:35:57.958383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.110 [2024-11-20 15:35:57.958389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.110 [2024-11-20 15:35:57.970570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.110 [2024-11-20 15:35:57.970989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.110 [2024-11-20 15:35:57.971005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.110 [2024-11-20 15:35:57.971012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.110 [2024-11-20 15:35:57.971174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.110 [2024-11-20 15:35:57.971335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.110 [2024-11-20 15:35:57.971343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.110 [2024-11-20 15:35:57.971349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.110 [2024-11-20 15:35:57.971354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.110 [2024-11-20 15:35:57.983494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.110 [2024-11-20 15:35:57.983924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.110 [2024-11-20 15:35:57.983940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.110 [2024-11-20 15:35:57.983952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.110 [2024-11-20 15:35:57.984139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.110 [2024-11-20 15:35:57.984311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.110 [2024-11-20 15:35:57.984319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.110 [2024-11-20 15:35:57.984329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.110 [2024-11-20 15:35:57.984335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.110 [2024-11-20 15:35:57.996445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.110 [2024-11-20 15:35:57.996904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.110 [2024-11-20 15:35:57.996961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.110 [2024-11-20 15:35:57.996985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.110 [2024-11-20 15:35:57.997565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.110 [2024-11-20 15:35:57.998159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.110 [2024-11-20 15:35:57.998185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.110 [2024-11-20 15:35:57.998205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.110 [2024-11-20 15:35:57.998225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.110 [2024-11-20 15:35:58.009399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.110 [2024-11-20 15:35:58.009835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.110 [2024-11-20 15:35:58.009851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.110 [2024-11-20 15:35:58.009858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.110 [2024-11-20 15:35:58.010041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.110 [2024-11-20 15:35:58.010220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.110 [2024-11-20 15:35:58.010228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.110 [2024-11-20 15:35:58.010234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.110 [2024-11-20 15:35:58.010241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.370 [2024-11-20 15:35:58.022448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.370 [2024-11-20 15:35:58.022876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.370 [2024-11-20 15:35:58.022919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.370 [2024-11-20 15:35:58.022941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.370 [2024-11-20 15:35:58.023439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.370 [2024-11-20 15:35:58.023612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.370 [2024-11-20 15:35:58.023620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.370 [2024-11-20 15:35:58.023627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.370 [2024-11-20 15:35:58.023633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.370 [2024-11-20 15:35:58.035312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.370 [2024-11-20 15:35:58.035736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.370 [2024-11-20 15:35:58.035751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.370 [2024-11-20 15:35:58.035758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.370 [2024-11-20 15:35:58.035920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.370 [2024-11-20 15:35:58.036111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.370 [2024-11-20 15:35:58.036120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.370 [2024-11-20 15:35:58.036126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.370 [2024-11-20 15:35:58.036132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.370 [2024-11-20 15:35:58.048146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.370 [2024-11-20 15:35:58.048507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.370 [2024-11-20 15:35:58.048550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.370 [2024-11-20 15:35:58.048574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.370 [2024-11-20 15:35:58.049169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.370 [2024-11-20 15:35:58.049584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.370 [2024-11-20 15:35:58.049592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.370 [2024-11-20 15:35:58.049598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.370 [2024-11-20 15:35:58.049604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.370 [2024-11-20 15:35:58.061079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.370 [2024-11-20 15:35:58.061498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.370 [2024-11-20 15:35:58.061514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.370 [2024-11-20 15:35:58.061520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.370 [2024-11-20 15:35:58.061684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.370 [2024-11-20 15:35:58.061846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.370 [2024-11-20 15:35:58.061854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.370 [2024-11-20 15:35:58.061860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.370 [2024-11-20 15:35:58.061866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.370 [2024-11-20 15:35:58.073903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.370 [2024-11-20 15:35:58.074306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.370 [2024-11-20 15:35:58.074326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.370 [2024-11-20 15:35:58.074333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.370 [2024-11-20 15:35:58.074495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.370 [2024-11-20 15:35:58.074658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.370 [2024-11-20 15:35:58.074666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.370 [2024-11-20 15:35:58.074671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.370 [2024-11-20 15:35:58.074677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.370 [2024-11-20 15:35:58.086762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.370 [2024-11-20 15:35:58.087106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.370 [2024-11-20 15:35:58.087122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.370 [2024-11-20 15:35:58.087130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.370 [2024-11-20 15:35:58.087301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.370 [2024-11-20 15:35:58.087472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.370 [2024-11-20 15:35:58.087480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.370 [2024-11-20 15:35:58.087487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.370 [2024-11-20 15:35:58.087493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.370 [2024-11-20 15:35:58.099608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.370 [2024-11-20 15:35:58.100063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.370 [2024-11-20 15:35:58.100107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.370 [2024-11-20 15:35:58.100130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.370 [2024-11-20 15:35:58.100547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.370 [2024-11-20 15:35:58.100720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.370 [2024-11-20 15:35:58.100728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.370 [2024-11-20 15:35:58.100735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.370 [2024-11-20 15:35:58.100742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.371 [2024-11-20 15:35:58.112476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.371 [2024-11-20 15:35:58.112890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.371 [2024-11-20 15:35:58.112905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.371 [2024-11-20 15:35:58.112912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.371 [2024-11-20 15:35:58.113103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.371 [2024-11-20 15:35:58.113279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.371 [2024-11-20 15:35:58.113288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.371 [2024-11-20 15:35:58.113295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.371 [2024-11-20 15:35:58.113301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.371 [2024-11-20 15:35:58.125339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.371 [2024-11-20 15:35:58.125762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.371 [2024-11-20 15:35:58.125777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.371 [2024-11-20 15:35:58.125784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.371 [2024-11-20 15:35:58.125952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.371 [2024-11-20 15:35:58.126139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.371 [2024-11-20 15:35:58.126147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.371 [2024-11-20 15:35:58.126153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.371 [2024-11-20 15:35:58.126160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.371 [2024-11-20 15:35:58.138228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.371 [2024-11-20 15:35:58.138646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.371 [2024-11-20 15:35:58.138661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.371 [2024-11-20 15:35:58.138668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.371 [2024-11-20 15:35:58.138830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.371 [2024-11-20 15:35:58.139016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.371 [2024-11-20 15:35:58.139025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.371 [2024-11-20 15:35:58.139031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.371 [2024-11-20 15:35:58.139037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.371 [2024-11-20 15:35:58.151044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.371 [2024-11-20 15:35:58.151434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.371 [2024-11-20 15:35:58.151450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.371 [2024-11-20 15:35:58.151457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.371 [2024-11-20 15:35:58.151619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.371 [2024-11-20 15:35:58.151782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.371 [2024-11-20 15:35:58.151789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.371 [2024-11-20 15:35:58.151801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.371 [2024-11-20 15:35:58.151807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.371 [2024-11-20 15:35:58.163839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.371 [2024-11-20 15:35:58.164283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.371 [2024-11-20 15:35:58.164300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.371 [2024-11-20 15:35:58.164307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.371 [2024-11-20 15:35:58.164479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.371 [2024-11-20 15:35:58.164650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.371 [2024-11-20 15:35:58.164658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.371 [2024-11-20 15:35:58.164664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.371 [2024-11-20 15:35:58.164670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.371 [2024-11-20 15:35:58.176690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.371 [2024-11-20 15:35:58.177151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.371 [2024-11-20 15:35:58.177195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.371 [2024-11-20 15:35:58.177219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.371 [2024-11-20 15:35:58.177696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.371 [2024-11-20 15:35:58.177868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.371 [2024-11-20 15:35:58.177876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.371 [2024-11-20 15:35:58.177882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.371 [2024-11-20 15:35:58.177888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.371 [2024-11-20 15:35:58.189586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.371 [2024-11-20 15:35:58.190017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.371 [2024-11-20 15:35:58.190033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.371 [2024-11-20 15:35:58.190039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.371 [2024-11-20 15:35:58.190202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.371 [2024-11-20 15:35:58.190364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.371 [2024-11-20 15:35:58.190372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.371 [2024-11-20 15:35:58.190378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.371 [2024-11-20 15:35:58.190384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.371 [2024-11-20 15:35:58.202830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.371 [2024-11-20 15:35:58.203269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.371 [2024-11-20 15:35:58.203286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.371 [2024-11-20 15:35:58.203294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.371 [2024-11-20 15:35:58.203472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.371 [2024-11-20 15:35:58.203649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.371 [2024-11-20 15:35:58.203658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.371 [2024-11-20 15:35:58.203665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.372 [2024-11-20 15:35:58.203673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.372 [2024-11-20 15:35:58.215731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.372 [2024-11-20 15:35:58.216076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.372 [2024-11-20 15:35:58.216091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.372 [2024-11-20 15:35:58.216098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.372 [2024-11-20 15:35:58.216262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.372 [2024-11-20 15:35:58.216425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.372 [2024-11-20 15:35:58.216433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.372 [2024-11-20 15:35:58.216439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.372 [2024-11-20 15:35:58.216445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.372 [2024-11-20 15:35:58.228660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.372 [2024-11-20 15:35:58.229086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.372 [2024-11-20 15:35:58.229103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.372 [2024-11-20 15:35:58.229110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.372 [2024-11-20 15:35:58.229284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.372 [2024-11-20 15:35:58.229456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.372 [2024-11-20 15:35:58.229464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.372 [2024-11-20 15:35:58.229470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.372 [2024-11-20 15:35:58.229477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.372 [2024-11-20 15:35:58.241504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.372 [2024-11-20 15:35:58.241922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.372 [2024-11-20 15:35:58.241938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.372 [2024-11-20 15:35:58.241954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.372 [2024-11-20 15:35:58.242142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.372 [2024-11-20 15:35:58.242314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.372 [2024-11-20 15:35:58.242322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.372 [2024-11-20 15:35:58.242329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.372 [2024-11-20 15:35:58.242335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.372 [2024-11-20 15:35:58.254360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.372 [2024-11-20 15:35:58.254692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.372 [2024-11-20 15:35:58.254708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.372 [2024-11-20 15:35:58.254715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.372 [2024-11-20 15:35:58.254877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.372 [2024-11-20 15:35:58.255065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.372 [2024-11-20 15:35:58.255073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.372 [2024-11-20 15:35:58.255079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.372 [2024-11-20 15:35:58.255086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.372 [2024-11-20 15:35:58.267250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.372 [2024-11-20 15:35:58.267659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.372 [2024-11-20 15:35:58.267674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.372 [2024-11-20 15:35:58.267681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.372 [2024-11-20 15:35:58.267844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.372 [2024-11-20 15:35:58.268030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.372 [2024-11-20 15:35:58.268039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.372 [2024-11-20 15:35:58.268045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.372 [2024-11-20 15:35:58.268051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.633 [2024-11-20 15:35:58.280358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.633 [2024-11-20 15:35:58.280716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.633 [2024-11-20 15:35:58.280733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.633 [2024-11-20 15:35:58.280740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.633 [2024-11-20 15:35:58.280917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.633 [2024-11-20 15:35:58.281102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.633 [2024-11-20 15:35:58.281111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.633 [2024-11-20 15:35:58.281117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.633 [2024-11-20 15:35:58.281124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.633 [2024-11-20 15:35:58.293473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.633 [2024-11-20 15:35:58.293908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.633 [2024-11-20 15:35:58.293924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.633 [2024-11-20 15:35:58.293931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.633 [2024-11-20 15:35:58.294114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.633 [2024-11-20 15:35:58.294291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.633 [2024-11-20 15:35:58.294300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.633 [2024-11-20 15:35:58.294306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.633 [2024-11-20 15:35:58.294313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.633 [2024-11-20 15:35:58.306654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.633 [2024-11-20 15:35:58.307064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.633 [2024-11-20 15:35:58.307081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.633 [2024-11-20 15:35:58.307088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.633 [2024-11-20 15:35:58.307266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.633 [2024-11-20 15:35:58.307444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.633 [2024-11-20 15:35:58.307452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.633 [2024-11-20 15:35:58.307459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.633 [2024-11-20 15:35:58.307466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.633 [2024-11-20 15:35:58.319809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.633 [2024-11-20 15:35:58.320214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.633 [2024-11-20 15:35:58.320231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.633 [2024-11-20 15:35:58.320238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.633 [2024-11-20 15:35:58.320415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.633 [2024-11-20 15:35:58.320593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.633 [2024-11-20 15:35:58.320601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.633 [2024-11-20 15:35:58.320612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.633 [2024-11-20 15:35:58.320619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.633 [2024-11-20 15:35:58.332999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.633 [2024-11-20 15:35:58.333412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.633 [2024-11-20 15:35:58.333429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.633 [2024-11-20 15:35:58.333436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.633 [2024-11-20 15:35:58.333625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.633 [2024-11-20 15:35:58.333808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.633 [2024-11-20 15:35:58.333816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.633 [2024-11-20 15:35:58.333822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.633 [2024-11-20 15:35:58.333829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.633 [2024-11-20 15:35:58.346177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.633 [2024-11-20 15:35:58.346586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.633 [2024-11-20 15:35:58.346602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.633 [2024-11-20 15:35:58.346610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.633 [2024-11-20 15:35:58.346787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.633 [2024-11-20 15:35:58.346970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.633 [2024-11-20 15:35:58.346980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.634 [2024-11-20 15:35:58.346986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.634 [2024-11-20 15:35:58.346992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.634 [2024-11-20 15:35:58.359410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.634 [2024-11-20 15:35:58.359752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.634 [2024-11-20 15:35:58.359768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.634 [2024-11-20 15:35:58.359776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.634 [2024-11-20 15:35:58.359964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.634 [2024-11-20 15:35:58.360150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.634 [2024-11-20 15:35:58.360159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.634 [2024-11-20 15:35:58.360165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.634 [2024-11-20 15:35:58.360172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.634 [2024-11-20 15:35:58.372462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.634 [2024-11-20 15:35:58.372842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.634 [2024-11-20 15:35:58.372858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.634 [2024-11-20 15:35:58.372866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.634 [2024-11-20 15:35:58.373054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.634 [2024-11-20 15:35:58.373243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.634 [2024-11-20 15:35:58.373252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.634 [2024-11-20 15:35:58.373258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.634 [2024-11-20 15:35:58.373265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.634 [2024-11-20 15:35:58.385514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.634 [2024-11-20 15:35:58.385941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.634 [2024-11-20 15:35:58.385965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.634 [2024-11-20 15:35:58.385973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.634 [2024-11-20 15:35:58.386151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.634 [2024-11-20 15:35:58.386330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.634 [2024-11-20 15:35:58.386338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.634 [2024-11-20 15:35:58.386345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.634 [2024-11-20 15:35:58.386352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.634 [2024-11-20 15:35:58.398594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.634 [2024-11-20 15:35:58.399024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.634 [2024-11-20 15:35:58.399041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.634 [2024-11-20 15:35:58.399049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.634 [2024-11-20 15:35:58.399226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.634 [2024-11-20 15:35:58.399404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.634 [2024-11-20 15:35:58.399413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.634 [2024-11-20 15:35:58.399419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.634 [2024-11-20 15:35:58.399426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.634 [2024-11-20 15:35:58.411781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.634 [2024-11-20 15:35:58.412193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.634 [2024-11-20 15:35:58.412210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.634 [2024-11-20 15:35:58.412220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.634 [2024-11-20 15:35:58.412398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.634 [2024-11-20 15:35:58.412576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.634 [2024-11-20 15:35:58.412583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.634 [2024-11-20 15:35:58.412590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.634 [2024-11-20 15:35:58.412597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.634 [2024-11-20 15:35:58.424982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.634 [2024-11-20 15:35:58.425343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.634 [2024-11-20 15:35:58.425360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.634 [2024-11-20 15:35:58.425368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.634 [2024-11-20 15:35:58.425545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.634 [2024-11-20 15:35:58.425723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.634 [2024-11-20 15:35:58.425731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.634 [2024-11-20 15:35:58.425738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.634 [2024-11-20 15:35:58.425744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.634 [2024-11-20 15:35:58.438103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.634 [2024-11-20 15:35:58.438515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.634 [2024-11-20 15:35:58.438532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.634 [2024-11-20 15:35:58.438539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.634 [2024-11-20 15:35:58.438717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.634 [2024-11-20 15:35:58.438895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.634 [2024-11-20 15:35:58.438903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.634 [2024-11-20 15:35:58.438910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.634 [2024-11-20 15:35:58.438917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.634 [2024-11-20 15:35:58.451190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.634 [2024-11-20 15:35:58.451546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.634 [2024-11-20 15:35:58.451564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.634 [2024-11-20 15:35:58.451572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.634 [2024-11-20 15:35:58.451750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.634 [2024-11-20 15:35:58.451931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.634 [2024-11-20 15:35:58.451940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.634 [2024-11-20 15:35:58.451953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.634 [2024-11-20 15:35:58.451960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.634 [2024-11-20 15:35:58.464380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.634 [2024-11-20 15:35:58.464788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.635 [2024-11-20 15:35:58.464839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.635 [2024-11-20 15:35:58.464862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.635 [2024-11-20 15:35:58.465377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.635 [2024-11-20 15:35:58.465556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.635 [2024-11-20 15:35:58.465564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.635 [2024-11-20 15:35:58.465570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.635 [2024-11-20 15:35:58.465577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.635 [2024-11-20 15:35:58.477408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.635 [2024-11-20 15:35:58.477826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.635 [2024-11-20 15:35:58.477842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.635 [2024-11-20 15:35:58.477849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.635 [2024-11-20 15:35:58.478028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.635 [2024-11-20 15:35:58.478202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.635 [2024-11-20 15:35:58.478210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.635 [2024-11-20 15:35:58.478217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.635 [2024-11-20 15:35:58.478223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.635 [2024-11-20 15:35:58.490204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.635 [2024-11-20 15:35:58.490504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.635 [2024-11-20 15:35:58.490520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.635 [2024-11-20 15:35:58.490527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.635 [2024-11-20 15:35:58.490690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.635 [2024-11-20 15:35:58.490852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.635 [2024-11-20 15:35:58.490860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.635 [2024-11-20 15:35:58.490870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.635 [2024-11-20 15:35:58.490876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.635 [2024-11-20 15:35:58.503113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.635 [2024-11-20 15:35:58.503531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.635 [2024-11-20 15:35:58.503547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.635 [2024-11-20 15:35:58.503555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.635 [2024-11-20 15:35:58.503726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.635 [2024-11-20 15:35:58.503902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.635 [2024-11-20 15:35:58.503910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.635 [2024-11-20 15:35:58.503917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.635 [2024-11-20 15:35:58.503923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.635 [2024-11-20 15:35:58.515964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.635 [2024-11-20 15:35:58.516336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.635 [2024-11-20 15:35:58.516353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.635 [2024-11-20 15:35:58.516360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.635 [2024-11-20 15:35:58.516533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.635 [2024-11-20 15:35:58.516705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.635 [2024-11-20 15:35:58.516713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.635 [2024-11-20 15:35:58.516719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.635 [2024-11-20 15:35:58.516725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.635 [2024-11-20 15:35:58.528870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.635 [2024-11-20 15:35:58.529249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.635 [2024-11-20 15:35:58.529265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.635 [2024-11-20 15:35:58.529273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.635 [2024-11-20 15:35:58.529445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.635 [2024-11-20 15:35:58.529617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.635 [2024-11-20 15:35:58.529626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.635 [2024-11-20 15:35:58.529632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.635 [2024-11-20 15:35:58.529638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.895 [2024-11-20 15:35:58.541831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.895 [2024-11-20 15:35:58.542259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.895 [2024-11-20 15:35:58.542276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.895 [2024-11-20 15:35:58.542283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.895 [2024-11-20 15:35:58.542459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.895 [2024-11-20 15:35:58.542636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.895 [2024-11-20 15:35:58.542645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.895 [2024-11-20 15:35:58.542651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.895 [2024-11-20 15:35:58.542657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.895 [2024-11-20 15:35:58.554761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.895 [2024-11-20 15:35:58.555179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.895 [2024-11-20 15:35:58.555195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.895 [2024-11-20 15:35:58.555202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.895 [2024-11-20 15:35:58.555374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.895 [2024-11-20 15:35:58.555546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.896 [2024-11-20 15:35:58.555555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.896 [2024-11-20 15:35:58.555561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.896 [2024-11-20 15:35:58.555567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.896 [2024-11-20 15:35:58.567624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.896 [2024-11-20 15:35:58.568074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.896 [2024-11-20 15:35:58.568118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.896 [2024-11-20 15:35:58.568142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.896 [2024-11-20 15:35:58.568685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.896 [2024-11-20 15:35:58.568858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.896 [2024-11-20 15:35:58.568866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.896 [2024-11-20 15:35:58.568873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.896 [2024-11-20 15:35:58.568879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.896 [2024-11-20 15:35:58.582773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.896 [2024-11-20 15:35:58.583302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.896 [2024-11-20 15:35:58.583325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.896 [2024-11-20 15:35:58.583339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.896 [2024-11-20 15:35:58.583592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.896 [2024-11-20 15:35:58.583846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.896 [2024-11-20 15:35:58.583858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.896 [2024-11-20 15:35:58.583867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.896 [2024-11-20 15:35:58.583876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.896 [2024-11-20 15:35:58.595704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.896 [2024-11-20 15:35:58.596149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.896 [2024-11-20 15:35:58.596177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.896 [2024-11-20 15:35:58.596185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.896 [2024-11-20 15:35:58.596357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.896 [2024-11-20 15:35:58.596529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.896 [2024-11-20 15:35:58.596538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.896 [2024-11-20 15:35:58.596544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.896 [2024-11-20 15:35:58.596550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.896 [2024-11-20 15:35:58.608595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.896 [2024-11-20 15:35:58.609067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.896 [2024-11-20 15:35:58.609111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.896 [2024-11-20 15:35:58.609133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.896 [2024-11-20 15:35:58.609713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.896 [2024-11-20 15:35:58.609901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.896 [2024-11-20 15:35:58.609909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.896 [2024-11-20 15:35:58.609915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.896 [2024-11-20 15:35:58.609921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.896 [2024-11-20 15:35:58.621616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.896 [2024-11-20 15:35:58.622040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.896 [2024-11-20 15:35:58.622057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.896 [2024-11-20 15:35:58.622064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.896 [2024-11-20 15:35:58.622237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.896 [2024-11-20 15:35:58.622414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.896 [2024-11-20 15:35:58.622422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.896 [2024-11-20 15:35:58.622428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.896 [2024-11-20 15:35:58.622434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.896 [2024-11-20 15:35:58.634514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.896 [2024-11-20 15:35:58.634938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.896 [2024-11-20 15:35:58.634960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.896 [2024-11-20 15:35:58.634967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.896 [2024-11-20 15:35:58.635154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.896 [2024-11-20 15:35:58.635327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.896 [2024-11-20 15:35:58.635335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.896 [2024-11-20 15:35:58.635341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.896 [2024-11-20 15:35:58.635348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.896 [2024-11-20 15:35:58.647335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.896 [2024-11-20 15:35:58.647772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.896 [2024-11-20 15:35:58.647789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.896 [2024-11-20 15:35:58.647796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.896 [2024-11-20 15:35:58.647973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.896 [2024-11-20 15:35:58.648147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.896 [2024-11-20 15:35:58.648155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.896 [2024-11-20 15:35:58.648161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.896 [2024-11-20 15:35:58.648167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.896 [2024-11-20 15:35:58.660214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.896 [2024-11-20 15:35:58.660591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.896 [2024-11-20 15:35:58.660636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.896 [2024-11-20 15:35:58.660658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.896 [2024-11-20 15:35:58.661252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.896 [2024-11-20 15:35:58.661837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.896 [2024-11-20 15:35:58.661862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.896 [2024-11-20 15:35:58.661902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.896 [2024-11-20 15:35:58.661909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.896 [2024-11-20 15:35:58.673033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.896 [2024-11-20 15:35:58.673404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.896 [2024-11-20 15:35:58.673420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.896 [2024-11-20 15:35:58.673427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.896 [2024-11-20 15:35:58.673600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.896 [2024-11-20 15:35:58.673772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.896 [2024-11-20 15:35:58.673780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.896 [2024-11-20 15:35:58.673786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.896 [2024-11-20 15:35:58.673793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.896 [2024-11-20 15:35:58.685838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.896 [2024-11-20 15:35:58.686258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.896 [2024-11-20 15:35:58.686273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.896 [2024-11-20 15:35:58.686280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.896 [2024-11-20 15:35:58.686452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.897 [2024-11-20 15:35:58.686625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.897 [2024-11-20 15:35:58.686633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.897 [2024-11-20 15:35:58.686640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.897 [2024-11-20 15:35:58.686646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.897 [2024-11-20 15:35:58.698790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.897 [2024-11-20 15:35:58.699166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.897 [2024-11-20 15:35:58.699184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.897 [2024-11-20 15:35:58.699191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.897 [2024-11-20 15:35:58.699363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.897 [2024-11-20 15:35:58.699536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.897 [2024-11-20 15:35:58.699544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.897 [2024-11-20 15:35:58.699551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.897 [2024-11-20 15:35:58.699557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.897 [2024-11-20 15:35:58.711835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.897 [2024-11-20 15:35:58.712270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.897 [2024-11-20 15:35:58.712288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:54.897 [2024-11-20 15:35:58.712295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:54.897 [2024-11-20 15:35:58.712473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:54.897 [2024-11-20 15:35:58.712651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.897 [2024-11-20 15:35:58.712659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.897 [2024-11-20 15:35:58.712666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.897 [2024-11-20 15:35:58.712673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.897 [2024-11-20 15:35:58.725034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.897 [2024-11-20 15:35:58.725465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.897 [2024-11-20 15:35:58.725482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.897 [2024-11-20 15:35:58.725490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.897 [2024-11-20 15:35:58.725667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.897 [2024-11-20 15:35:58.725846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.897 [2024-11-20 15:35:58.725854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.897 [2024-11-20 15:35:58.725861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.897 [2024-11-20 15:35:58.725868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.897 [2024-11-20 15:35:58.738015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.897 [2024-11-20 15:35:58.738457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.897 [2024-11-20 15:35:58.738501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.897 [2024-11-20 15:35:58.738525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.897 [2024-11-20 15:35:58.739038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.897 [2024-11-20 15:35:58.739212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.897 [2024-11-20 15:35:58.739220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.897 [2024-11-20 15:35:58.739226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.897 [2024-11-20 15:35:58.739233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.897 [2024-11-20 15:35:58.750891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.897 [2024-11-20 15:35:58.751325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.897 [2024-11-20 15:35:58.751342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.897 [2024-11-20 15:35:58.751352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.897 [2024-11-20 15:35:58.751524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.897 [2024-11-20 15:35:58.751696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.897 [2024-11-20 15:35:58.751704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.897 [2024-11-20 15:35:58.751710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.897 [2024-11-20 15:35:58.751716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.897 [2024-11-20 15:35:58.763730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.897 [2024-11-20 15:35:58.764111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.897 [2024-11-20 15:35:58.764156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.897 [2024-11-20 15:35:58.764179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.897 [2024-11-20 15:35:58.764757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.897 [2024-11-20 15:35:58.764956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.897 [2024-11-20 15:35:58.764964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.897 [2024-11-20 15:35:58.764970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.897 [2024-11-20 15:35:58.764976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.897 [2024-11-20 15:35:58.776625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.897 [2024-11-20 15:35:58.777024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.897 [2024-11-20 15:35:58.777041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.897 [2024-11-20 15:35:58.777048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.897 [2024-11-20 15:35:58.777211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.897 [2024-11-20 15:35:58.777374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.897 [2024-11-20 15:35:58.777381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.897 [2024-11-20 15:35:58.777387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.897 [2024-11-20 15:35:58.777393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.897 [2024-11-20 15:35:58.789550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.897 [2024-11-20 15:35:58.789898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.897 [2024-11-20 15:35:58.789914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:54.897 [2024-11-20 15:35:58.789921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:54.897 [2024-11-20 15:35:58.790100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:54.897 [2024-11-20 15:35:58.790279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.897 [2024-11-20 15:35:58.790287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.897 [2024-11-20 15:35:58.790293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.897 [2024-11-20 15:35:58.790300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
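The repeated `connect() failed, errno = 111` above is ECONNREFUSED: the target at 10.0.0.2:4420 has been taken down, so every TCP connect attempted during the reset loop is refused immediately. A minimal sketch (plain Python sockets, not SPDK code) reproducing that errno against a local port with no listener:

```python
import errno
import socket

def try_connect(addr: str, port: int) -> int:
    """Attempt a TCP connect; return 0 on success, else the errno."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect((addr, port))
        return 0
    except OSError as e:
        return e.errno
    finally:
        s.close()

# Find a local port that is almost certainly closed: bind an ephemeral
# port, note its number, and release it before connecting.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

err = try_connect("127.0.0.1", port)
print(err == errno.ECONNREFUSED)  # ECONNREFUSED is 111 on Linux
```

The connect fails fast on loopback, which is why each reset attempt in the log completes within a millisecond of being started.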
00:26:55.158 [2024-11-20 15:35:58.802659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.158 [2024-11-20 15:35:58.803088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.158 [2024-11-20 15:35:58.803133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:55.158 [2024-11-20 15:35:58.803155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:55.158 [2024-11-20 15:35:58.803733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:55.158 [2024-11-20 15:35:58.804274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.158 [2024-11-20 15:35:58.804283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.158 [2024-11-20 15:35:58.804289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.158 [2024-11-20 15:35:58.804295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.158 [2024-11-20 15:35:58.815510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.158 [2024-11-20 15:35:58.815903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.158 [2024-11-20 15:35:58.815921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:55.158 [2024-11-20 15:35:58.815928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:55.158 [2024-11-20 15:35:58.816106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:55.158 [2024-11-20 15:35:58.816278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.158 [2024-11-20 15:35:58.816286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.158 [2024-11-20 15:35:58.816292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.158 [2024-11-20 15:35:58.816299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.159 [2024-11-20 15:35:58.828423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.159 [2024-11-20 15:35:58.828849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.159 [2024-11-20 15:35:58.828892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:55.159 [2024-11-20 15:35:58.828915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:55.159 [2024-11-20 15:35:58.829371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:55.159 [2024-11-20 15:35:58.829553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.159 [2024-11-20 15:35:58.829563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.159 [2024-11-20 15:35:58.829572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.159 [2024-11-20 15:35:58.829580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.159 5700.40 IOPS, 22.27 MiB/s [2024-11-20T14:35:59.067Z]
00:26:55.159 [2024-11-20 15:35:58.841295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.159 [2024-11-20 15:35:58.841789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.159 [2024-11-20 15:35:58.841835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:55.159 [2024-11-20 15:35:58.841859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:55.159 [2024-11-20 15:35:58.842376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:55.159 [2024-11-20 15:35:58.842541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.159 [2024-11-20 15:35:58.842549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.159 [2024-11-20 15:35:58.842555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.159 [2024-11-20 15:35:58.842561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.159 [2024-11-20 15:35:58.854233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.159 [2024-11-20 15:35:58.854621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.159 [2024-11-20 15:35:58.854636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:55.159 [2024-11-20 15:35:58.854643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:55.159 [2024-11-20 15:35:58.854806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:55.159 [2024-11-20 15:35:58.854974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.159 [2024-11-20 15:35:58.854982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.159 [2024-11-20 15:35:58.854988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.159 [2024-11-20 15:35:58.854994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.159 [2024-11-20 15:35:58.867128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.159 [2024-11-20 15:35:58.867548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.159 [2024-11-20 15:35:58.867565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:55.159 [2024-11-20 15:35:58.867572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:55.159 [2024-11-20 15:35:58.867743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:55.159 [2024-11-20 15:35:58.867916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.159 [2024-11-20 15:35:58.867924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.159 [2024-11-20 15:35:58.867930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.159 [2024-11-20 15:35:58.867936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.159 [2024-11-20 15:35:58.880116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.159 [2024-11-20 15:35:58.880521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.159 [2024-11-20 15:35:58.880538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:55.159 [2024-11-20 15:35:58.880544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:55.159 [2024-11-20 15:35:58.880708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:55.159 [2024-11-20 15:35:58.880870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.159 [2024-11-20 15:35:58.880878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.159 [2024-11-20 15:35:58.880885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.159 [2024-11-20 15:35:58.880890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.159 [2024-11-20 15:35:58.892921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.159 [2024-11-20 15:35:58.893343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.159 [2024-11-20 15:35:58.893360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:55.159 [2024-11-20 15:35:58.893367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:55.159 [2024-11-20 15:35:58.893539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:55.159 [2024-11-20 15:35:58.893712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.159 [2024-11-20 15:35:58.893720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.159 [2024-11-20 15:35:58.893727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.159 [2024-11-20 15:35:58.893733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.159 [2024-11-20 15:35:58.905845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.159 [2024-11-20 15:35:58.906264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.159 [2024-11-20 15:35:58.906308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:55.159 [2024-11-20 15:35:58.906332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:55.159 [2024-11-20 15:35:58.906911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:55.159 [2024-11-20 15:35:58.907483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.159 [2024-11-20 15:35:58.907492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.159 [2024-11-20 15:35:58.907498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.159 [2024-11-20 15:35:58.907504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.159 [2024-11-20 15:35:58.918752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.159 [2024-11-20 15:35:58.919192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.159 [2024-11-20 15:35:58.919251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:55.159 [2024-11-20 15:35:58.919275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:55.159 [2024-11-20 15:35:58.919855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:55.159 [2024-11-20 15:35:58.920441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.159 [2024-11-20 15:35:58.920450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.159 [2024-11-20 15:35:58.920456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.159 [2024-11-20 15:35:58.920462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.159 [2024-11-20 15:35:58.931575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.159 [2024-11-20 15:35:58.931920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.159 [2024-11-20 15:35:58.931936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:55.159 [2024-11-20 15:35:58.931943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:55.159 [2024-11-20 15:35:58.932122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:55.159 [2024-11-20 15:35:58.932294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.159 [2024-11-20 15:35:58.932302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.159 [2024-11-20 15:35:58.932308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.159 [2024-11-20 15:35:58.932315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.159 [2024-11-20 15:35:58.944594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.159 [2024-11-20 15:35:58.944921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.159 [2024-11-20 15:35:58.944969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:55.159 [2024-11-20 15:35:58.944996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:55.159 [2024-11-20 15:35:58.945531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:55.159 [2024-11-20 15:35:58.945704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.159 [2024-11-20 15:35:58.945712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.159 [2024-11-20 15:35:58.945718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.160 [2024-11-20 15:35:58.945724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.160 [2024-11-20 15:35:58.957379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.160 [2024-11-20 15:35:58.957813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.160 [2024-11-20 15:35:58.957856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:55.160 [2024-11-20 15:35:58.957879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:55.160 [2024-11-20 15:35:58.958439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:55.160 [2024-11-20 15:35:58.958613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.160 [2024-11-20 15:35:58.958621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.160 [2024-11-20 15:35:58.958627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.160 [2024-11-20 15:35:58.958633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.160 [2024-11-20 15:35:58.970278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.160 [2024-11-20 15:35:58.970641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.160 [2024-11-20 15:35:58.970657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:55.160 [2024-11-20 15:35:58.970665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:55.160 [2024-11-20 15:35:58.970843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:55.160 [2024-11-20 15:35:58.971027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.160 [2024-11-20 15:35:58.971036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.160 [2024-11-20 15:35:58.971044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.160 [2024-11-20 15:35:58.971050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.160 [2024-11-20 15:35:58.983410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.160 [2024-11-20 15:35:58.983818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.160 [2024-11-20 15:35:58.983835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:55.160 [2024-11-20 15:35:58.983842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:55.160 [2024-11-20 15:35:58.984026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:55.160 [2024-11-20 15:35:58.984205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.160 [2024-11-20 15:35:58.984214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.160 [2024-11-20 15:35:58.984220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.160 [2024-11-20 15:35:58.984226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.160 [2024-11-20 15:35:58.996258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.160 [2024-11-20 15:35:58.996672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.160 [2024-11-20 15:35:58.996716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:55.160 [2024-11-20 15:35:58.996739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:55.160 [2024-11-20 15:35:58.997336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:55.160 [2024-11-20 15:35:58.997803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.160 [2024-11-20 15:35:58.997811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.160 [2024-11-20 15:35:58.997821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.160 [2024-11-20 15:35:58.997827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.160 [2024-11-20 15:35:59.009187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.160 [2024-11-20 15:35:59.009582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.160 [2024-11-20 15:35:59.009599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:55.160 [2024-11-20 15:35:59.009606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:55.160 [2024-11-20 15:35:59.009778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:55.160 [2024-11-20 15:35:59.009962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.160 [2024-11-20 15:35:59.009970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.160 [2024-11-20 15:35:59.009976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.160 [2024-11-20 15:35:59.009983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.160 [2024-11-20 15:35:59.022062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.160 [2024-11-20 15:35:59.022466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.160 [2024-11-20 15:35:59.022483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:55.160 [2024-11-20 15:35:59.022490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:55.160 [2024-11-20 15:35:59.022667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:55.160 [2024-11-20 15:35:59.022845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.160 [2024-11-20 15:35:59.022854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.160 [2024-11-20 15:35:59.022861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.160 [2024-11-20 15:35:59.022867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.160 [2024-11-20 15:35:59.034940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.160 [2024-11-20 15:35:59.035360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.160 [2024-11-20 15:35:59.035376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:55.160 [2024-11-20 15:35:59.035383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:55.160 [2024-11-20 15:35:59.035555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:55.160 [2024-11-20 15:35:59.035727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.160 [2024-11-20 15:35:59.035735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.160 [2024-11-20 15:35:59.035741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.160 [2024-11-20 15:35:59.035748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.160 [2024-11-20 15:35:59.047776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.160 [2024-11-20 15:35:59.048225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.160 [2024-11-20 15:35:59.048269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420
00:26:55.160 [2024-11-20 15:35:59.048291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set
00:26:55.160 [2024-11-20 15:35:59.048801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor
00:26:55.160 [2024-11-20 15:35:59.048980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.160 [2024-11-20 15:35:59.048989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.160 [2024-11-20 15:35:59.048995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.160 [2024-11-20 15:35:59.049002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.160 [2024-11-20 15:35:59.060810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.160 [2024-11-20 15:35:59.061214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.160 [2024-11-20 15:35:59.061232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.160 [2024-11-20 15:35:59.061239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.160 [2024-11-20 15:35:59.061416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.160 [2024-11-20 15:35:59.061595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.160 [2024-11-20 15:35:59.061603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.160 [2024-11-20 15:35:59.061609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.160 [2024-11-20 15:35:59.061615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.420 [2024-11-20 15:35:59.073710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.420 [2024-11-20 15:35:59.074125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.420 [2024-11-20 15:35:59.074142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.420 [2024-11-20 15:35:59.074149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.420 [2024-11-20 15:35:59.074322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.420 [2024-11-20 15:35:59.074495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.420 [2024-11-20 15:35:59.074503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.420 [2024-11-20 15:35:59.074510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.420 [2024-11-20 15:35:59.074516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.420 [2024-11-20 15:35:59.086557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.420 [2024-11-20 15:35:59.086986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.420 [2024-11-20 15:35:59.087038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.420 [2024-11-20 15:35:59.087061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.420 [2024-11-20 15:35:59.087613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.420 [2024-11-20 15:35:59.087777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.420 [2024-11-20 15:35:59.087784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.420 [2024-11-20 15:35:59.087790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.420 [2024-11-20 15:35:59.087796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.420 [2024-11-20 15:35:59.099458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.420 [2024-11-20 15:35:59.099852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.420 [2024-11-20 15:35:59.099868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.420 [2024-11-20 15:35:59.099875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.420 [2024-11-20 15:35:59.100063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.420 [2024-11-20 15:35:59.100237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.420 [2024-11-20 15:35:59.100245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.420 [2024-11-20 15:35:59.100251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.420 [2024-11-20 15:35:59.100257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.420 [2024-11-20 15:35:59.112298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.420 [2024-11-20 15:35:59.112688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.420 [2024-11-20 15:35:59.112703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.420 [2024-11-20 15:35:59.112710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.420 [2024-11-20 15:35:59.112872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.420 [2024-11-20 15:35:59.113060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.420 [2024-11-20 15:35:59.113068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.420 [2024-11-20 15:35:59.113075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.420 [2024-11-20 15:35:59.113081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.420 [2024-11-20 15:35:59.125192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.420 [2024-11-20 15:35:59.125514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.420 [2024-11-20 15:35:59.125530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.420 [2024-11-20 15:35:59.125537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.420 [2024-11-20 15:35:59.125703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.420 [2024-11-20 15:35:59.125865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.420 [2024-11-20 15:35:59.125873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.420 [2024-11-20 15:35:59.125879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.420 [2024-11-20 15:35:59.125884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.420 [2024-11-20 15:35:59.138088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.420 [2024-11-20 15:35:59.138483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.420 [2024-11-20 15:35:59.138526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.420 [2024-11-20 15:35:59.138549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.420 [2024-11-20 15:35:59.139005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.420 [2024-11-20 15:35:59.139169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.420 [2024-11-20 15:35:59.139176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.421 [2024-11-20 15:35:59.139182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.421 [2024-11-20 15:35:59.139188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.421 [2024-11-20 15:35:59.150916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.421 [2024-11-20 15:35:59.151310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.421 [2024-11-20 15:35:59.151326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.421 [2024-11-20 15:35:59.151333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.421 [2024-11-20 15:35:59.151505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.421 [2024-11-20 15:35:59.151677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.421 [2024-11-20 15:35:59.151685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.421 [2024-11-20 15:35:59.151692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.421 [2024-11-20 15:35:59.151698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.421 [2024-11-20 15:35:59.163718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.421 [2024-11-20 15:35:59.164113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.421 [2024-11-20 15:35:59.164129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.421 [2024-11-20 15:35:59.164136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.421 [2024-11-20 15:35:59.164299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.421 [2024-11-20 15:35:59.164461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.421 [2024-11-20 15:35:59.164469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.421 [2024-11-20 15:35:59.164478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.421 [2024-11-20 15:35:59.164485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.421 [2024-11-20 15:35:59.176641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.421 [2024-11-20 15:35:59.177008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.421 [2024-11-20 15:35:59.177025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.421 [2024-11-20 15:35:59.177031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.421 [2024-11-20 15:35:59.177194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.421 [2024-11-20 15:35:59.177357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.421 [2024-11-20 15:35:59.177365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.421 [2024-11-20 15:35:59.177371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.421 [2024-11-20 15:35:59.177377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.421 [2024-11-20 15:35:59.189546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.421 [2024-11-20 15:35:59.189923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.421 [2024-11-20 15:35:59.189980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.421 [2024-11-20 15:35:59.190004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.422 [2024-11-20 15:35:59.190582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.422 [2024-11-20 15:35:59.191103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.422 [2024-11-20 15:35:59.191112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.422 [2024-11-20 15:35:59.191118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.422 [2024-11-20 15:35:59.191125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.422 [2024-11-20 15:35:59.202611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.422 [2024-11-20 15:35:59.203028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.422 [2024-11-20 15:35:59.203044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.422 [2024-11-20 15:35:59.203052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.422 [2024-11-20 15:35:59.203223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.422 [2024-11-20 15:35:59.203399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.422 [2024-11-20 15:35:59.203407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.422 [2024-11-20 15:35:59.203414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.422 [2024-11-20 15:35:59.203420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.422 [2024-11-20 15:35:59.215464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.422 [2024-11-20 15:35:59.215875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.422 [2024-11-20 15:35:59.215892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.422 [2024-11-20 15:35:59.215899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.422 [2024-11-20 15:35:59.216078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.422 [2024-11-20 15:35:59.216250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.422 [2024-11-20 15:35:59.216259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.422 [2024-11-20 15:35:59.216265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.422 [2024-11-20 15:35:59.216271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.422 [2024-11-20 15:35:59.228370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.422 [2024-11-20 15:35:59.228782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.422 [2024-11-20 15:35:59.228799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.422 [2024-11-20 15:35:59.228806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.422 [2024-11-20 15:35:59.228984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.422 [2024-11-20 15:35:59.229157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.422 [2024-11-20 15:35:59.229166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.423 [2024-11-20 15:35:59.229172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.423 [2024-11-20 15:35:59.229178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.423 [2024-11-20 15:35:59.241457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.423 [2024-11-20 15:35:59.241900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.423 [2024-11-20 15:35:59.241917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.423 [2024-11-20 15:35:59.241924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.423 [2024-11-20 15:35:59.242108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.423 [2024-11-20 15:35:59.242286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.423 [2024-11-20 15:35:59.242294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.423 [2024-11-20 15:35:59.242302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.423 [2024-11-20 15:35:59.242310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.423 [2024-11-20 15:35:59.254489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.423 [2024-11-20 15:35:59.254907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.423 [2024-11-20 15:35:59.254971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.423 [2024-11-20 15:35:59.254995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.423 [2024-11-20 15:35:59.255505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.423 [2024-11-20 15:35:59.255678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.423 [2024-11-20 15:35:59.255686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.423 [2024-11-20 15:35:59.255693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.423 [2024-11-20 15:35:59.255699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.423 [2024-11-20 15:35:59.267290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.423 [2024-11-20 15:35:59.267684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.424 [2024-11-20 15:35:59.267699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.424 [2024-11-20 15:35:59.267706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.424 [2024-11-20 15:35:59.267868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.424 [2024-11-20 15:35:59.268038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.424 [2024-11-20 15:35:59.268046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.424 [2024-11-20 15:35:59.268052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.424 [2024-11-20 15:35:59.268058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.424 [2024-11-20 15:35:59.280136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.424 [2024-11-20 15:35:59.280525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.424 [2024-11-20 15:35:59.280541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.424 [2024-11-20 15:35:59.280547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.424 [2024-11-20 15:35:59.280710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.424 [2024-11-20 15:35:59.280872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.424 [2024-11-20 15:35:59.280880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.424 [2024-11-20 15:35:59.280886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.424 [2024-11-20 15:35:59.280892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.424 [2024-11-20 15:35:59.292927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.424 [2024-11-20 15:35:59.293349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.424 [2024-11-20 15:35:59.293365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.424 [2024-11-20 15:35:59.293372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.424 [2024-11-20 15:35:59.293548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.424 [2024-11-20 15:35:59.293721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.425 [2024-11-20 15:35:59.293729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.425 [2024-11-20 15:35:59.293735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.425 [2024-11-20 15:35:59.293742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.425 [2024-11-20 15:35:59.305820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.425 [2024-11-20 15:35:59.306192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.425 [2024-11-20 15:35:59.306209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.425 [2024-11-20 15:35:59.306216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.425 [2024-11-20 15:35:59.306388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.425 [2024-11-20 15:35:59.306559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.425 [2024-11-20 15:35:59.306567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.425 [2024-11-20 15:35:59.306574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.425 [2024-11-20 15:35:59.306580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.425 [2024-11-20 15:35:59.318605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.425 [2024-11-20 15:35:59.319065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.425 [2024-11-20 15:35:59.319083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.425 [2024-11-20 15:35:59.319090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.425 [2024-11-20 15:35:59.319263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.425 [2024-11-20 15:35:59.319436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.425 [2024-11-20 15:35:59.319445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.425 [2024-11-20 15:35:59.319451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.425 [2024-11-20 15:35:59.319457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.687 [2024-11-20 15:35:59.331437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.687 [2024-11-20 15:35:59.331878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.687 [2024-11-20 15:35:59.331895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.687 [2024-11-20 15:35:59.331904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.687 [2024-11-20 15:35:59.332081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.687 [2024-11-20 15:35:59.332254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.687 [2024-11-20 15:35:59.332262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.687 [2024-11-20 15:35:59.332272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.687 [2024-11-20 15:35:59.332278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.687 [2024-11-20 15:35:59.344379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.687 [2024-11-20 15:35:59.344800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.687 [2024-11-20 15:35:59.344816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.687 [2024-11-20 15:35:59.344823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.687 [2024-11-20 15:35:59.345003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.687 [2024-11-20 15:35:59.345175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.687 [2024-11-20 15:35:59.345183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.687 [2024-11-20 15:35:59.345190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.687 [2024-11-20 15:35:59.345195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.687 [2024-11-20 15:35:59.357314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.687 [2024-11-20 15:35:59.357756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.687 [2024-11-20 15:35:59.357772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.687 [2024-11-20 15:35:59.357779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.687 [2024-11-20 15:35:59.357956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.687 [2024-11-20 15:35:59.358127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.687 [2024-11-20 15:35:59.358135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.687 [2024-11-20 15:35:59.358142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.687 [2024-11-20 15:35:59.358148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.687 [2024-11-20 15:35:59.370120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.687 [2024-11-20 15:35:59.370558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.687 [2024-11-20 15:35:59.370589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.687 [2024-11-20 15:35:59.370612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.687 [2024-11-20 15:35:59.371206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.687 [2024-11-20 15:35:59.371426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.687 [2024-11-20 15:35:59.371434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.687 [2024-11-20 15:35:59.371440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.687 [2024-11-20 15:35:59.371446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.687 [2024-11-20 15:35:59.383028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.687 [2024-11-20 15:35:59.383448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.687 [2024-11-20 15:35:59.383464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.687 [2024-11-20 15:35:59.383470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.687 [2024-11-20 15:35:59.383634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.687 [2024-11-20 15:35:59.383796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.687 [2024-11-20 15:35:59.383804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.687 [2024-11-20 15:35:59.383810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.687 [2024-11-20 15:35:59.383816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2315570 Killed "${NVMF_APP[@]}" "$@" 00:26:55.687 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:55.687 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:55.687 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:55.688 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:55.688 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.688 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2316832 00:26:55.688 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2316832 00:26:55.688 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:55.688 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2316832 ']' 00:26:55.688 [2024-11-20 15:35:59.396230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.688 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.688 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:55.688 [2024-11-20 15:35:59.396659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.688 [2024-11-20 15:35:59.396677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.688 [2024-11-20 15:35:59.396684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be 
set 00:26:55.688 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.688 [2024-11-20 15:35:59.396861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.688 [2024-11-20 15:35:59.397046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.688 [2024-11-20 15:35:59.397056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.688 [2024-11-20 15:35:59.397064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.688 [2024-11-20 15:35:59.397070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.688 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:55.688 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.688 [2024-11-20 15:35:59.409269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.688 [2024-11-20 15:35:59.409702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.688 [2024-11-20 15:35:59.409718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.688 [2024-11-20 15:35:59.409726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.688 [2024-11-20 15:35:59.409903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.688 [2024-11-20 15:35:59.410087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.688 [2024-11-20 15:35:59.410096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.688 [2024-11-20 15:35:59.410102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.688 [2024-11-20 15:35:59.410109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.688 [2024-11-20 15:35:59.422465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.688 [2024-11-20 15:35:59.422901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.688 [2024-11-20 15:35:59.422918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.688 [2024-11-20 15:35:59.422925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.688 [2024-11-20 15:35:59.423108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.688 [2024-11-20 15:35:59.423287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.688 [2024-11-20 15:35:59.423295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.688 [2024-11-20 15:35:59.423302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.688 [2024-11-20 15:35:59.423308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.688 [2024-11-20 15:35:59.435473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.688 [2024-11-20 15:35:59.435927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.688 [2024-11-20 15:35:59.435944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.688 [2024-11-20 15:35:59.435959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.688 [2024-11-20 15:35:59.436137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.688 [2024-11-20 15:35:59.436315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.688 [2024-11-20 15:35:59.436324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.688 [2024-11-20 15:35:59.436330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.688 [2024-11-20 15:35:59.436336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.688 [2024-11-20 15:35:59.443129] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:26:55.688 [2024-11-20 15:35:59.443170] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.688 [2024-11-20 15:35:59.448573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.688 [2024-11-20 15:35:59.448978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.688 [2024-11-20 15:35:59.448995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.688 [2024-11-20 15:35:59.449003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.688 [2024-11-20 15:35:59.449181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.688 [2024-11-20 15:35:59.449359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.688 [2024-11-20 15:35:59.449368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.688 [2024-11-20 15:35:59.449374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.688 [2024-11-20 15:35:59.449382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.688 [2024-11-20 15:35:59.461543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.688 [2024-11-20 15:35:59.461902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.688 [2024-11-20 15:35:59.461919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.688 [2024-11-20 15:35:59.461927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.688 [2024-11-20 15:35:59.462112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.688 [2024-11-20 15:35:59.462290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.688 [2024-11-20 15:35:59.462299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.688 [2024-11-20 15:35:59.462306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.688 [2024-11-20 15:35:59.462313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.688 [2024-11-20 15:35:59.474532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.688 [2024-11-20 15:35:59.474861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.688 [2024-11-20 15:35:59.474885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.688 [2024-11-20 15:35:59.474894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.688 [2024-11-20 15:35:59.475079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.688 [2024-11-20 15:35:59.475258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.688 [2024-11-20 15:35:59.475267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.688 [2024-11-20 15:35:59.475274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.688 [2024-11-20 15:35:59.475281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.688 [2024-11-20 15:35:59.487635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.688 [2024-11-20 15:35:59.488048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.688 [2024-11-20 15:35:59.488066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.688 [2024-11-20 15:35:59.488074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.688 [2024-11-20 15:35:59.488251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.688 [2024-11-20 15:35:59.488427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.688 [2024-11-20 15:35:59.488437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.688 [2024-11-20 15:35:59.488445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.688 [2024-11-20 15:35:59.488452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.688 [2024-11-20 15:35:59.500807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.688 [2024-11-20 15:35:59.501273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.688 [2024-11-20 15:35:59.501290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.688 [2024-11-20 15:35:59.501298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.688 [2024-11-20 15:35:59.501475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.689 [2024-11-20 15:35:59.501652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.689 [2024-11-20 15:35:59.501662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.689 [2024-11-20 15:35:59.501670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.689 [2024-11-20 15:35:59.501677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.689 [2024-11-20 15:35:59.513878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.689 [2024-11-20 15:35:59.514316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.689 [2024-11-20 15:35:59.514333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.689 [2024-11-20 15:35:59.514340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.689 [2024-11-20 15:35:59.514518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.689 [2024-11-20 15:35:59.514697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.689 [2024-11-20 15:35:59.514705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.689 [2024-11-20 15:35:59.514712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.689 [2024-11-20 15:35:59.514719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.689 [2024-11-20 15:35:59.524108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:55.689 [2024-11-20 15:35:59.527039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.689 [2024-11-20 15:35:59.527482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.689 [2024-11-20 15:35:59.527500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.689 [2024-11-20 15:35:59.527511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.689 [2024-11-20 15:35:59.527690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.689 [2024-11-20 15:35:59.527869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.689 [2024-11-20 15:35:59.527877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.689 [2024-11-20 15:35:59.527884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.689 [2024-11-20 15:35:59.527890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.689 [2024-11-20 15:35:59.540027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.689 [2024-11-20 15:35:59.540487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.689 [2024-11-20 15:35:59.540505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.689 [2024-11-20 15:35:59.540513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.689 [2024-11-20 15:35:59.540691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.689 [2024-11-20 15:35:59.540870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.689 [2024-11-20 15:35:59.540878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.689 [2024-11-20 15:35:59.540885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.689 [2024-11-20 15:35:59.540892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.689 [2024-11-20 15:35:59.553013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.689 [2024-11-20 15:35:59.553439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.689 [2024-11-20 15:35:59.553456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.689 [2024-11-20 15:35:59.553463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.689 [2024-11-20 15:35:59.553640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.689 [2024-11-20 15:35:59.553817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.689 [2024-11-20 15:35:59.553825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.689 [2024-11-20 15:35:59.553832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.689 [2024-11-20 15:35:59.553838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.689 [2024-11-20 15:35:59.566034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.689 [2024-11-20 15:35:59.566455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.689 [2024-11-20 15:35:59.566480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:55.689 [2024-11-20 15:35:59.566483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.689 [2024-11-20 15:35:59.566487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:55.689 [2024-11-20 15:35:59.566499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:55.689 [2024-11-20 15:35:59.566500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.689 [2024-11-20 15:35:59.566505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:55.689 [2024-11-20 15:35:59.566509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.689 [2024-11-20 15:35:59.566688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.689 [2024-11-20 15:35:59.566866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.689 [2024-11-20 15:35:59.566874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.689 [2024-11-20 15:35:59.566880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.689 [2024-11-20 15:35:59.566887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.689 [2024-11-20 15:35:59.567897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:55.689 [2024-11-20 15:35:59.567994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.689 [2024-11-20 15:35:59.567993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:55.689 [2024-11-20 15:35:59.579094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.689 [2024-11-20 15:35:59.579545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.689 [2024-11-20 15:35:59.579564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.689 [2024-11-20 15:35:59.579572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.689 [2024-11-20 15:35:59.579751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.689 [2024-11-20 15:35:59.579930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.689 [2024-11-20 15:35:59.579939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.689 [2024-11-20 15:35:59.579954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.689 [2024-11-20 15:35:59.579962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.950 [2024-11-20 15:35:59.592164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.950 [2024-11-20 15:35:59.592613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.950 [2024-11-20 15:35:59.592632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.950 [2024-11-20 15:35:59.592641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.950 [2024-11-20 15:35:59.592820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.950 [2024-11-20 15:35:59.593004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.950 [2024-11-20 15:35:59.593013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.950 [2024-11-20 15:35:59.593020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.950 [2024-11-20 15:35:59.593027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.950 [2024-11-20 15:35:59.605233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.950 [2024-11-20 15:35:59.605662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.950 [2024-11-20 15:35:59.605682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.951 [2024-11-20 15:35:59.605691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.951 [2024-11-20 15:35:59.605871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.951 [2024-11-20 15:35:59.606054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.951 [2024-11-20 15:35:59.606064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.951 [2024-11-20 15:35:59.606070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.951 [2024-11-20 15:35:59.606077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.951 [2024-11-20 15:35:59.618440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.951 [2024-11-20 15:35:59.618886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.951 [2024-11-20 15:35:59.618905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.951 [2024-11-20 15:35:59.618914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.951 [2024-11-20 15:35:59.619099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.951 [2024-11-20 15:35:59.619277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.951 [2024-11-20 15:35:59.619286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.951 [2024-11-20 15:35:59.619293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.951 [2024-11-20 15:35:59.619300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.951 [2024-11-20 15:35:59.631500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.951 [2024-11-20 15:35:59.631843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.951 [2024-11-20 15:35:59.631862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.951 [2024-11-20 15:35:59.631870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.951 [2024-11-20 15:35:59.632053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.951 [2024-11-20 15:35:59.632245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.951 [2024-11-20 15:35:59.632255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.951 [2024-11-20 15:35:59.632262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.951 [2024-11-20 15:35:59.632270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.951 [2024-11-20 15:35:59.644624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.951 [2024-11-20 15:35:59.645036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.951 [2024-11-20 15:35:59.645054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.951 [2024-11-20 15:35:59.645067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.951 [2024-11-20 15:35:59.645245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.951 [2024-11-20 15:35:59.645423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.951 [2024-11-20 15:35:59.645431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.951 [2024-11-20 15:35:59.645439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.951 [2024-11-20 15:35:59.645446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.951 [2024-11-20 15:35:59.657793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.951 [2024-11-20 15:35:59.658233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.951 [2024-11-20 15:35:59.658251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.951 [2024-11-20 15:35:59.658258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.951 [2024-11-20 15:35:59.658436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.951 [2024-11-20 15:35:59.658615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.951 [2024-11-20 15:35:59.658623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.951 [2024-11-20 15:35:59.658629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.951 [2024-11-20 15:35:59.658636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.951 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:55.951 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:55.951 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:55.951 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:55.951 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.951 [2024-11-20 15:35:59.670983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.951 [2024-11-20 15:35:59.671260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.951 [2024-11-20 15:35:59.671277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.951 [2024-11-20 15:35:59.671285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.951 [2024-11-20 15:35:59.671463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.951 [2024-11-20 15:35:59.671641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.951 [2024-11-20 15:35:59.671650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.951 [2024-11-20 15:35:59.671656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.951 [2024-11-20 15:35:59.671663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.951 [2024-11-20 15:35:59.684174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.951 [2024-11-20 15:35:59.684602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.951 [2024-11-20 15:35:59.684623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.951 [2024-11-20 15:35:59.684630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.951 [2024-11-20 15:35:59.684807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.951 [2024-11-20 15:35:59.684989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.951 [2024-11-20 15:35:59.684999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.951 [2024-11-20 15:35:59.685005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.951 [2024-11-20 15:35:59.685011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.951 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.951 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:55.951 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.951 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.951 [2024-11-20 15:35:59.697346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.951 [2024-11-20 15:35:59.697685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.951 [2024-11-20 15:35:59.697702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.951 [2024-11-20 15:35:59.697709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.952 [2024-11-20 15:35:59.697886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.952 [2024-11-20 15:35:59.698069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.952 [2024-11-20 15:35:59.698079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.952 [2024-11-20 15:35:59.698085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.952 [2024-11-20 15:35:59.698092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.952 [2024-11-20 15:35:59.703397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.952 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.952 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:55.952 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.952 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.952 [2024-11-20 15:35:59.710440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.952 [2024-11-20 15:35:59.710730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.952 [2024-11-20 15:35:59.710747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.952 [2024-11-20 15:35:59.710754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.952 [2024-11-20 15:35:59.710932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.952 [2024-11-20 15:35:59.711117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.952 [2024-11-20 15:35:59.711128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.952 [2024-11-20 15:35:59.711135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.952 [2024-11-20 15:35:59.711141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.952 [2024-11-20 15:35:59.723502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.952 [2024-11-20 15:35:59.723929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.952 [2024-11-20 15:35:59.723946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.952 [2024-11-20 15:35:59.723959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.952 [2024-11-20 15:35:59.724136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.952 [2024-11-20 15:35:59.724314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.952 [2024-11-20 15:35:59.724322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.952 [2024-11-20 15:35:59.724329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.952 [2024-11-20 15:35:59.724336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.952 [2024-11-20 15:35:59.736552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.952 [2024-11-20 15:35:59.736899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.952 [2024-11-20 15:35:59.736916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.952 [2024-11-20 15:35:59.736924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.952 [2024-11-20 15:35:59.737106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.952 [2024-11-20 15:35:59.737286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.952 [2024-11-20 15:35:59.737295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.952 [2024-11-20 15:35:59.737301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.952 [2024-11-20 15:35:59.737308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.952 Malloc0 00:26:55.952 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.952 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:55.952 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.952 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.952 [2024-11-20 15:35:59.749670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.952 [2024-11-20 15:35:59.750083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.952 [2024-11-20 15:35:59.750102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.952 [2024-11-20 15:35:59.750109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.952 [2024-11-20 15:35:59.750287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.952 [2024-11-20 15:35:59.750471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.952 [2024-11-20 15:35:59.750480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.952 [2024-11-20 15:35:59.750487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.952 [2024-11-20 15:35:59.750493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.952 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.952 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:55.952 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.952 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.952 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.952 [2024-11-20 15:35:59.762844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.952 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:55.952 [2024-11-20 15:35:59.763268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.952 [2024-11-20 15:35:59.763286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1b500 with addr=10.0.0.2, port=4420 00:26:55.952 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.952 [2024-11-20 15:35:59.763294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b500 is same with the state(6) to be set 00:26:55.952 [2024-11-20 15:35:59.763473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1b500 (9): Bad file descriptor 00:26:55.952 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.952 [2024-11-20 15:35:59.763650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.952 [2024-11-20 15:35:59.763661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] 
controller reinitialization failed 00:26:55.952 [2024-11-20 15:35:59.763668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.952 [2024-11-20 15:35:59.763675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.952 [2024-11-20 15:35:59.766058] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.952 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.952 15:35:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2315906 00:26:55.952 [2024-11-20 15:35:59.776051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.952 4750.33 IOPS, 18.56 MiB/s [2024-11-20T14:35:59.860Z] [2024-11-20 15:35:59.843095] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:26:58.260 5640.14 IOPS, 22.03 MiB/s [2024-11-20T14:36:03.103Z] 6318.25 IOPS, 24.68 MiB/s [2024-11-20T14:36:04.039Z] 6841.33 IOPS, 26.72 MiB/s [2024-11-20T14:36:04.975Z] 7274.60 IOPS, 28.42 MiB/s [2024-11-20T14:36:06.028Z] 7634.00 IOPS, 29.82 MiB/s [2024-11-20T14:36:06.963Z] 7912.50 IOPS, 30.91 MiB/s [2024-11-20T14:36:07.899Z] 8158.15 IOPS, 31.87 MiB/s [2024-11-20T14:36:09.275Z] 8368.36 IOPS, 32.69 MiB/s [2024-11-20T14:36:09.275Z] 8541.67 IOPS, 33.37 MiB/s 00:27:05.367 Latency(us) 00:27:05.367 [2024-11-20T14:36:09.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.367 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:05.367 Verification LBA range: start 0x0 length 0x4000 00:27:05.367 Nvme1n1 : 15.05 8514.63 33.26 10940.05 0.00 6541.95 438.09 44222.55 00:27:05.367 [2024-11-20T14:36:09.275Z] =================================================================================================================== 00:27:05.367 [2024-11-20T14:36:09.275Z] Total : 8514.63 33.26 10940.05 0.00 6541.95 438.09 44222.55 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:05.367 rmmod nvme_tcp 00:27:05.367 rmmod nvme_fabrics 00:27:05.367 rmmod nvme_keyring 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2316832 ']' 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2316832 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2316832 ']' 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2316832 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2316832 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2316832' 00:27:05.367 killing process with pid 2316832 00:27:05.367 
15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2316832 00:27:05.367 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2316832 00:27:05.625 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:05.625 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:05.625 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:05.625 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:05.625 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:05.625 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:05.625 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:05.625 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:05.625 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:05.625 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.625 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.625 15:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:08.162 00:27:08.162 real 0m26.192s 00:27:08.162 user 1m1.098s 00:27:08.162 sys 0m6.839s 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:08.162 ************************************ 00:27:08.162 END TEST nvmf_bdevperf 00:27:08.162 
************************************ 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.162 ************************************ 00:27:08.162 START TEST nvmf_target_disconnect 00:27:08.162 ************************************ 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:08.162 * Looking for test storage... 00:27:08.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:08.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.162 --rc genhtml_branch_coverage=1 00:27:08.162 --rc genhtml_function_coverage=1 00:27:08.162 --rc genhtml_legend=1 00:27:08.162 --rc geninfo_all_blocks=1 00:27:08.162 --rc geninfo_unexecuted_blocks=1 
00:27:08.162 00:27:08.162 ' 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:08.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.162 --rc genhtml_branch_coverage=1 00:27:08.162 --rc genhtml_function_coverage=1 00:27:08.162 --rc genhtml_legend=1 00:27:08.162 --rc geninfo_all_blocks=1 00:27:08.162 --rc geninfo_unexecuted_blocks=1 00:27:08.162 00:27:08.162 ' 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:08.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.162 --rc genhtml_branch_coverage=1 00:27:08.162 --rc genhtml_function_coverage=1 00:27:08.162 --rc genhtml_legend=1 00:27:08.162 --rc geninfo_all_blocks=1 00:27:08.162 --rc geninfo_unexecuted_blocks=1 00:27:08.162 00:27:08.162 ' 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:08.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.162 --rc genhtml_branch_coverage=1 00:27:08.162 --rc genhtml_function_coverage=1 00:27:08.162 --rc genhtml_legend=1 00:27:08.162 --rc geninfo_all_blocks=1 00:27:08.162 --rc geninfo_unexecuted_blocks=1 00:27:08.162 00:27:08.162 ' 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:08.162 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.163 15:36:11 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:08.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:08.163 15:36:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:13.437 
15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:13.437 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:13.437 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:13.697 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:13.697 Found net devices under 0000:86:00.0: cvl_0_0 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:13.697 Found net devices under 0000:86:00.1: cvl_0_1 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:13.697 15:36:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:13.697 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:13.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:13.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:27:13.697 00:27:13.697 --- 10.0.0.2 ping statistics --- 00:27:13.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.698 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:27:13.698 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:13.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:13.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:27:13.698 00:27:13.698 --- 10.0.0.1 ping statistics --- 00:27:13.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.698 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:27:13.698 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.698 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:13.698 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:13.698 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:13.698 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:13.698 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:13.698 15:36:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:13.698 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:13.698 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:13.957 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:13.957 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:13.957 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:13.957 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:13.957 ************************************ 00:27:13.957 START TEST nvmf_target_disconnect_tc1 00:27:13.957 ************************************ 00:27:13.957 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:13.957 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:13.957 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:13.957 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:13.957 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:13.957 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:13.957 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:13.957 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:13.957 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:13.957 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:13.957 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:13.957 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:13.957 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:13.957 [2024-11-20 15:36:17.759031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-11-20 15:36:17.759138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd7dab0 with 
addr=10.0.0.2, port=4420 00:27:13.958 [2024-11-20 15:36:17.759189] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:13.958 [2024-11-20 15:36:17.759214] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:13.958 [2024-11-20 15:36:17.759234] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:13.958 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:13.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:13.958 Initializing NVMe Controllers 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:13.958 00:27:13.958 real 0m0.116s 00:27:13.958 user 0m0.047s 00:27:13.958 sys 0m0.070s 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:13.958 ************************************ 00:27:13.958 END TEST nvmf_target_disconnect_tc1 00:27:13.958 ************************************ 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:13.958 15:36:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:13.958 ************************************ 00:27:13.958 START TEST nvmf_target_disconnect_tc2 00:27:13.958 ************************************ 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2322513 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2322513 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2322513 ']' 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:13.958 15:36:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.217 [2024-11-20 15:36:17.903220] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:27:14.217 [2024-11-20 15:36:17.903266] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:14.217 [2024-11-20 15:36:17.981956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:14.217 [2024-11-20 15:36:18.022002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:14.217 [2024-11-20 15:36:18.022043] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:14.217 [2024-11-20 15:36:18.022053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:14.217 [2024-11-20 15:36:18.022060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:14.217 [2024-11-20 15:36:18.022066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:14.217 [2024-11-20 15:36:18.023764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:14.217 [2024-11-20 15:36:18.023792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:14.217 [2024-11-20 15:36:18.024214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:14.217 [2024-11-20 15:36:18.024216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.476 Malloc0 00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.476 15:36:18 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.476 [2024-11-20 15:36:18.209801] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.476 [2024-11-20 15:36:18.242072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2322539
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:27:14.476 15:36:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:16.382 15:36:20
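The target-side sequence the rpc_cmd calls above perform (malloc bdev, TCP transport, subsystem, namespace, data and discovery listeners) can be reproduced by hand against a running nvmf_tgt. A sketch, assuming SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock (rpc_cmd in the autotest harness is a thin wrapper around it; flags are copied from the log, not re-derived):

```shell
# Sketch of the target configuration the test builds (assumes a running
# nvmf_tgt and an SPDK checkout; run from the SPDK repo root).
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks
./scripts/rpc.py nvmf_create_transport -t tcp -o         # flags as in the log above
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The reconnect example then drives 32-deep random read/write I/O (-q 32 -o 4096 -w randrw -M 50) against that listener for 10 seconds while the test kills and restarts the target underneath it.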
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2322513
00:27:16.382 15:36:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:27:16.382 Read completed with error (sct=0, sc=8)
00:27:16.382 starting I/O failed
00:27:16.382 Write completed with error (sct=0, sc=8)
00:27:16.382 starting I/O failed
00:27:16.382 [... the Read/Write "completed with error (sct=0, sc=8)" / "starting I/O failed" pairs above repeat for every outstanding I/O on the queue pair ...]
00:27:16.382 [2024-11-20 15:36:20.270856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:16.382 [... another burst of Read/Write completion errors (sct=0, sc=8) ...]
00:27:16.382 [2024-11-20 15:36:20.271076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:16.383 [... another burst of Read/Write completion errors (sct=0, sc=8) ...]
00:27:16.383 [2024-11-20 15:36:20.271276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:16.383 [... another burst of Read/Write completion errors (sct=0, sc=8) ...]
00:27:16.383 [2024-11-20 15:36:20.271472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.383 [2024-11-20 15:36:20.271654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.383 [2024-11-20 15:36:20.271678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:16.383 qpair failed and we were unable to recover it.
00:27:16.383 [... this connect()/connection-error/"qpair failed" triple repeats as the host retries tqpair=0x7fdeec000b90 ...]
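Each failed completion above carries sct=0, sc=8: status code type 0 is the NVMe generic command status set, and code 0x08 in that set is "Command Aborted due to SQ Deletion", which is what in-flight I/O reports when its submission queue pair is torn down by the disconnect. A small decode helper as a sketch (the mapping covers only a few generic codes relevant here, not the full spec table):

```shell
# Hypothetical helper: decode the (sct, sc) pair printed per failed I/O.
# Only a handful of NVMe generic-status (sct=0) codes are listed.
decode_status() {
    local sct=$1 sc=$2
    if [ "$sct" -ne 0 ]; then echo "non-generic status type $sct"; return; fi
    case $sc in
        0) echo "SUCCESS" ;;
        4) echo "DATA TRANSFER ERROR" ;;
        7) echo "COMMAND ABORT REQUESTED" ;;
        8) echo "COMMAND ABORTED DUE TO SQ DELETION" ;;
        *) echo "generic status code $sc" ;;
    esac
}
decode_status 0 8   # -> COMMAND ABORTED DUE TO SQ DELETION
```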
00:27:16.383 [... further identical connect() retries against tqpair=0x7fdeec000b90, each ending "qpair failed and we were unable to recover it." ...]
00:27:16.384 [2024-11-20 15:36:20.276977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.384 [2024-11-20 15:36:20.276995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.384 qpair failed and we were unable to recover it.
00:27:16.384 [... the same connect()/connection-error/"qpair failed" pattern then repeats against tqpair=0x1841ba0 ...]
00:27:16.385 [2024-11-20 15:36:20.279301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.279311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.279380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.279390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.279453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.279464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.279524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.279534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.279599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.279609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 
00:27:16.385 [2024-11-20 15:36:20.279678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.279688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.279757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.279767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.279902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.279915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.279990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.280001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.280074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.280084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 
00:27:16.385 [2024-11-20 15:36:20.280150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.280160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.280222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.280231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.280294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.280304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.280374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.280384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.280448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.280458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 
00:27:16.385 [2024-11-20 15:36:20.280510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.280521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.280585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.280595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.280657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.280669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.280739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.280749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.280888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.280898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 
00:27:16.385 [2024-11-20 15:36:20.280961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.280972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.281028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.281039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.281105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.281115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.281185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.281195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.281270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.281281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 
00:27:16.385 [2024-11-20 15:36:20.281401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.281411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.281477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.281487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.281560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.281570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.385 [2024-11-20 15:36:20.281714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.385 [2024-11-20 15:36:20.281745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.385 qpair failed and we were unable to recover it. 00:27:16.386 [2024-11-20 15:36:20.281933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.386 [2024-11-20 15:36:20.281974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.386 qpair failed and we were unable to recover it. 
00:27:16.386 [2024-11-20 15:36:20.282095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.386 [2024-11-20 15:36:20.282125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.386 qpair failed and we were unable to recover it. 00:27:16.386 [2024-11-20 15:36:20.282297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.386 [2024-11-20 15:36:20.282328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.386 qpair failed and we were unable to recover it. 00:27:16.386 [2024-11-20 15:36:20.282454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.386 [2024-11-20 15:36:20.282464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.386 qpair failed and we were unable to recover it. 00:27:16.386 [2024-11-20 15:36:20.282606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.386 [2024-11-20 15:36:20.282616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.386 qpair failed and we were unable to recover it. 00:27:16.386 [2024-11-20 15:36:20.282832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.386 [2024-11-20 15:36:20.282844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.386 qpair failed and we were unable to recover it. 
00:27:16.386 [2024-11-20 15:36:20.282985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.386 [2024-11-20 15:36:20.282996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.386 qpair failed and we were unable to recover it. 00:27:16.386 [2024-11-20 15:36:20.283128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.386 [2024-11-20 15:36:20.283138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.386 qpair failed and we were unable to recover it. 00:27:16.386 [2024-11-20 15:36:20.283273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.386 [2024-11-20 15:36:20.283305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.386 qpair failed and we were unable to recover it. 00:27:16.386 [2024-11-20 15:36:20.283417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.386 [2024-11-20 15:36:20.283447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.386 qpair failed and we were unable to recover it. 00:27:16.386 [2024-11-20 15:36:20.283573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.386 [2024-11-20 15:36:20.283604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.386 qpair failed and we were unable to recover it. 
00:27:16.386 [2024-11-20 15:36:20.283817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.386 [2024-11-20 15:36:20.283847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.386 qpair failed and we were unable to recover it. 00:27:16.386 [2024-11-20 15:36:20.283956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.386 [2024-11-20 15:36:20.283990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.386 qpair failed and we were unable to recover it. 00:27:16.386 [2024-11-20 15:36:20.284119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.386 [2024-11-20 15:36:20.284149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.386 qpair failed and we were unable to recover it. 00:27:16.386 [2024-11-20 15:36:20.284289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.386 [2024-11-20 15:36:20.284321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.386 qpair failed and we were unable to recover it. 00:27:16.386 [2024-11-20 15:36:20.284518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.386 [2024-11-20 15:36:20.284548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.386 qpair failed and we were unable to recover it. 
00:27:16.386 [2024-11-20 15:36:20.284737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.284769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 00:27:16.664 [2024-11-20 15:36:20.284968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.285001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 00:27:16.664 [2024-11-20 15:36:20.285191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.285222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 00:27:16.664 [2024-11-20 15:36:20.285366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.285397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 00:27:16.664 [2024-11-20 15:36:20.285611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.285643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 
00:27:16.664 [2024-11-20 15:36:20.285856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.285886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 00:27:16.664 [2024-11-20 15:36:20.286140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.286173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 00:27:16.664 [2024-11-20 15:36:20.286298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.286329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 00:27:16.664 [2024-11-20 15:36:20.286511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.286542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 00:27:16.664 [2024-11-20 15:36:20.286729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.286760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 
00:27:16.664 [2024-11-20 15:36:20.286872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.286904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 00:27:16.664 [2024-11-20 15:36:20.287094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.287147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 00:27:16.664 [2024-11-20 15:36:20.287339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.287371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 00:27:16.664 [2024-11-20 15:36:20.287551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.287583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 00:27:16.664 [2024-11-20 15:36:20.287782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.287814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 
00:27:16.664 [2024-11-20 15:36:20.288095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.288128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 00:27:16.664 [2024-11-20 15:36:20.288250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.288287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 00:27:16.664 [2024-11-20 15:36:20.288479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.288511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 00:27:16.664 [2024-11-20 15:36:20.288688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.288719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 00:27:16.664 [2024-11-20 15:36:20.288830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.288861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 
00:27:16.664 [2024-11-20 15:36:20.289049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.664 [2024-11-20 15:36:20.289083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.664 qpair failed and we were unable to recover it. 00:27:16.665 [2024-11-20 15:36:20.289274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.665 [2024-11-20 15:36:20.289306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.665 qpair failed and we were unable to recover it. 00:27:16.665 [2024-11-20 15:36:20.289432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.665 [2024-11-20 15:36:20.289463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.665 qpair failed and we were unable to recover it. 00:27:16.665 [2024-11-20 15:36:20.289664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.665 [2024-11-20 15:36:20.289695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.665 qpair failed and we were unable to recover it. 00:27:16.665 [2024-11-20 15:36:20.289898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.665 [2024-11-20 15:36:20.289929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.665 qpair failed and we were unable to recover it. 
00:27:16.665 [2024-11-20 15:36:20.290182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.665 [2024-11-20 15:36:20.290213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.665 qpair failed and we were unable to recover it. 00:27:16.665 [2024-11-20 15:36:20.290335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.665 [2024-11-20 15:36:20.290366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.665 qpair failed and we were unable to recover it. 00:27:16.665 [2024-11-20 15:36:20.290626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.665 [2024-11-20 15:36:20.290659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.665 qpair failed and we were unable to recover it. 00:27:16.665 [2024-11-20 15:36:20.290943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.665 [2024-11-20 15:36:20.290982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.665 qpair failed and we were unable to recover it. 00:27:16.665 [2024-11-20 15:36:20.291119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.665 [2024-11-20 15:36:20.291151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.665 qpair failed and we were unable to recover it. 
00:27:16.665 [2024-11-20 15:36:20.291354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.291387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.291536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.291568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.291683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.291714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.291895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.291928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.292137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.292168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.292294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.292325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.292460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.292492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.292675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.292706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.292894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.292925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.293124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.293156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.293337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.293369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.293557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.293588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.293830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.293862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.294048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.294082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.294287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.294318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.294652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.294684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.294896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.294928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.295199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.295231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.295516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.295548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.295818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.295849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.296140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.296173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.296307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.296338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.296521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.296552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.296684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.296715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.296940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.297006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.297140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.297171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.297299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.297331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.665 [2024-11-20 15:36:20.297671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.665 [2024-11-20 15:36:20.297759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.665 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.297990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.298027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.298180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.298211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.298386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.298418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.298742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.298772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.299032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.299065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.299334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.299366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.299604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.299635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.299891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.299922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.300199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.300230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.300358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.300388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.300520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.300552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.300733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.300764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.300940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.300992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.301213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.301244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.301386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.301417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.301556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.301586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.301827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.301858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.302044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.302078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.302266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.302297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.302414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.302445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.302650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.302682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.302868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.302898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.303121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.303153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.303329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.303360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.303502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.303533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.303663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.303695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.303892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.303923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.304122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.304154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.304365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.304397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.304603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.304635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.304747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.304779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.304896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.304926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.305072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.305103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.305343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.305394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.305544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.305575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.666 [2024-11-20 15:36:20.305834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.666 [2024-11-20 15:36:20.305866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.666 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.306115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.306147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.306286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.306318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.306454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.306486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.306748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.306784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.306987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.307022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.307172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.307204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.307393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.307424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.307611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.307643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.307853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.307884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.308040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.308073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.308210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.308242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.308378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.308409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.308539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.308571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.308739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.308771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.309015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.309049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.309235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.309266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.309389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.309420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.309562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.309594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.309857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.309889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.310099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.310133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.310394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.310426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.310638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.310669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.310920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.310964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.311161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.311193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.311334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.311366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.311639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.311671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.311808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.311840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.312044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.312078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.312275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.312307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.312548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.312581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.312846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.312885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.313026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.313059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.313205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.313238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.313437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.313468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.313747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.313779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.313967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.314000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.667 qpair failed and we were unable to recover it.
00:27:16.667 [2024-11-20 15:36:20.314269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.667 [2024-11-20 15:36:20.314301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.668 qpair failed and we were unable to recover it.
00:27:16.668 [2024-11-20 15:36:20.314489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.668 [2024-11-20 15:36:20.314520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.668 qpair failed and we were unable to recover it.
00:27:16.668 [2024-11-20 15:36:20.314796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.668 [2024-11-20 15:36:20.314827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.668 qpair failed and we were unable to recover it.
00:27:16.668 [2024-11-20 15:36:20.315018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.668 [2024-11-20 15:36:20.315051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.668 qpair failed and we were unable to recover it.
00:27:16.668 [2024-11-20 15:36:20.315229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.668 [2024-11-20 15:36:20.315261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.668 qpair failed and we were unable to recover it.
00:27:16.668 [2024-11-20 15:36:20.315462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.668 [2024-11-20 15:36:20.315494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.668 qpair failed and we were unable to recover it.
00:27:16.668 [2024-11-20 15:36:20.315763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.668 [2024-11-20 15:36:20.315794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.668 qpair failed and we were unable to recover it.
00:27:16.668 [2024-11-20 15:36:20.315993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.668 [2024-11-20 15:36:20.316026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.668 qpair failed and we were unable to recover it.
00:27:16.668 [2024-11-20 15:36:20.316219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.668 [2024-11-20 15:36:20.316251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.668 qpair failed and we were unable to recover it.
00:27:16.668 [2024-11-20 15:36:20.316424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.668 [2024-11-20 15:36:20.316456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.668 qpair failed and we were unable to recover it.
00:27:16.668 [2024-11-20 15:36:20.316743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.668 [2024-11-20 15:36:20.316775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.668 qpair failed and we were unable to recover it.
00:27:16.668 [2024-11-20 15:36:20.316966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.668 [2024-11-20 15:36:20.317000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.668 qpair failed and we were unable to recover it.
00:27:16.668 [2024-11-20 15:36:20.317150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.668 [2024-11-20 15:36:20.317182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.668 qpair failed and we were unable to recover it.
00:27:16.668 [2024-11-20 15:36:20.317371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.668 [2024-11-20 15:36:20.317402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.668 qpair failed and we were unable to recover it.
00:27:16.668 [2024-11-20 15:36:20.317721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.668 [2024-11-20 15:36:20.317752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.668 qpair failed and we were unable to recover it.
00:27:16.668 [2024-11-20 15:36:20.317964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.317997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 00:27:16.668 [2024-11-20 15:36:20.318135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.318168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 00:27:16.668 [2024-11-20 15:36:20.318404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.318436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 00:27:16.668 [2024-11-20 15:36:20.318635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.318667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 00:27:16.668 [2024-11-20 15:36:20.318878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.318909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 
00:27:16.668 [2024-11-20 15:36:20.319170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.319203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 00:27:16.668 [2024-11-20 15:36:20.319328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.319365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 00:27:16.668 [2024-11-20 15:36:20.319509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.319540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 00:27:16.668 [2024-11-20 15:36:20.319814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.319846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 00:27:16.668 [2024-11-20 15:36:20.320086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.320120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 
00:27:16.668 [2024-11-20 15:36:20.320263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.320295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 00:27:16.668 [2024-11-20 15:36:20.320488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.320519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 00:27:16.668 [2024-11-20 15:36:20.320781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.320813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 00:27:16.668 [2024-11-20 15:36:20.321097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.321129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 00:27:16.668 [2024-11-20 15:36:20.321335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.321367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 
00:27:16.668 [2024-11-20 15:36:20.321558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.321590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 00:27:16.668 [2024-11-20 15:36:20.321777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.321808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 00:27:16.668 [2024-11-20 15:36:20.322065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.322099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 00:27:16.668 [2024-11-20 15:36:20.322284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.322315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 00:27:16.668 [2024-11-20 15:36:20.322457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.322488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 
00:27:16.668 [2024-11-20 15:36:20.322610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.322643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 00:27:16.668 [2024-11-20 15:36:20.322929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.322969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 00:27:16.668 [2024-11-20 15:36:20.323109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.668 [2024-11-20 15:36:20.323140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.668 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.323393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.323425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.323675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.323706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 
00:27:16.669 [2024-11-20 15:36:20.323880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.323912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.324060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.324094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.324299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.324331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.324522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.324553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.324727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.324759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 
00:27:16.669 [2024-11-20 15:36:20.325010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.325044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.325236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.325267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.325488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.325521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.325643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.325674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.325861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.325893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 
00:27:16.669 [2024-11-20 15:36:20.326105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.326139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.326332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.326363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.326608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.326639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.326888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.326920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.327125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.327158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 
00:27:16.669 [2024-11-20 15:36:20.327343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.327375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.327523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.327554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.327746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.327778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.327988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.328022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.328291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.328323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 
00:27:16.669 [2024-11-20 15:36:20.328545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.328576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.328860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.328892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.329174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.329208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.329349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.329380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.329634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.329666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 
00:27:16.669 [2024-11-20 15:36:20.329853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.329885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.330063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.330096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.330380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.330412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.330698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.330730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 00:27:16.669 [2024-11-20 15:36:20.330985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.669 [2024-11-20 15:36:20.331018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.669 qpair failed and we were unable to recover it. 
00:27:16.669 [2024-11-20 15:36:20.331217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.670 [2024-11-20 15:36:20.331248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.670 qpair failed and we were unable to recover it. 00:27:16.670 [2024-11-20 15:36:20.331491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.670 [2024-11-20 15:36:20.331524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.670 qpair failed and we were unable to recover it. 00:27:16.670 [2024-11-20 15:36:20.331785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.670 [2024-11-20 15:36:20.331817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.670 qpair failed and we were unable to recover it. 00:27:16.670 [2024-11-20 15:36:20.332003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.670 [2024-11-20 15:36:20.332035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.670 qpair failed and we were unable to recover it. 00:27:16.670 [2024-11-20 15:36:20.332176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.670 [2024-11-20 15:36:20.332208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.670 qpair failed and we were unable to recover it. 
00:27:16.670 [2024-11-20 15:36:20.332334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.670 [2024-11-20 15:36:20.332365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.670 qpair failed and we were unable to recover it. 00:27:16.670 [2024-11-20 15:36:20.332644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.670 [2024-11-20 15:36:20.332676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.670 qpair failed and we were unable to recover it. 00:27:16.670 [2024-11-20 15:36:20.332928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.670 [2024-11-20 15:36:20.332983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.670 qpair failed and we were unable to recover it. 00:27:16.670 [2024-11-20 15:36:20.333121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.670 [2024-11-20 15:36:20.333154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.670 qpair failed and we were unable to recover it. 00:27:16.670 [2024-11-20 15:36:20.333337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.670 [2024-11-20 15:36:20.333368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.670 qpair failed and we were unable to recover it. 
00:27:16.670 [2024-11-20 15:36:20.333569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.670 [2024-11-20 15:36:20.333601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.670 qpair failed and we were unable to recover it. 00:27:16.670 [2024-11-20 15:36:20.333885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.670 [2024-11-20 15:36:20.333917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.670 qpair failed and we were unable to recover it. 00:27:16.670 [2024-11-20 15:36:20.334095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.670 [2024-11-20 15:36:20.334128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.670 qpair failed and we were unable to recover it. 00:27:16.670 [2024-11-20 15:36:20.334319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.670 [2024-11-20 15:36:20.334352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.670 qpair failed and we were unable to recover it. 00:27:16.670 [2024-11-20 15:36:20.334664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.670 [2024-11-20 15:36:20.334697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.670 qpair failed and we were unable to recover it. 
00:27:16.670 [2024-11-20 15:36:20.334943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.670 [2024-11-20 15:36:20.334986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.670 qpair failed and we were unable to recover it. 00:27:16.670 [2024-11-20 15:36:20.335186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.670 [2024-11-20 15:36:20.335219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.670 qpair failed and we were unable to recover it. 00:27:16.670 [2024-11-20 15:36:20.335415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.670 [2024-11-20 15:36:20.335446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.670 qpair failed and we were unable to recover it. 00:27:16.670 [2024-11-20 15:36:20.335655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.670 [2024-11-20 15:36:20.335686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.670 qpair failed and we were unable to recover it. 00:27:16.670 [2024-11-20 15:36:20.335943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.670 [2024-11-20 15:36:20.335993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:16.670 qpair failed and we were unable to recover it. 
00:27:16.670 [2024-11-20 15:36:20.336219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.670 [2024-11-20 15:36:20.336249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:16.670 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure errors for tqpair=0x1841ba0 repeated through 15:36:20.352]
00:27:16.672 [2024-11-20 15:36:20.353091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.672 [2024-11-20 15:36:20.353164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:16.672 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure errors for tqpair=0x7fdeec000b90 repeated through 15:36:20.365]
00:27:16.673 [2024-11-20 15:36:20.365980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.673 [2024-11-20 15:36:20.366012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.673 qpair failed and we were unable to recover it. 00:27:16.673 [2024-11-20 15:36:20.366307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.673 [2024-11-20 15:36:20.366337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.673 qpair failed and we were unable to recover it. 00:27:16.673 [2024-11-20 15:36:20.366587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.673 [2024-11-20 15:36:20.366618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.673 qpair failed and we were unable to recover it. 00:27:16.673 [2024-11-20 15:36:20.366900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.673 [2024-11-20 15:36:20.366931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.673 qpair failed and we were unable to recover it. 00:27:16.673 [2024-11-20 15:36:20.367192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.673 [2024-11-20 15:36:20.367225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.673 qpair failed and we were unable to recover it. 
00:27:16.673 [2024-11-20 15:36:20.367523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.673 [2024-11-20 15:36:20.367560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.673 qpair failed and we were unable to recover it. 00:27:16.673 [2024-11-20 15:36:20.367839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.673 [2024-11-20 15:36:20.367870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.673 qpair failed and we were unable to recover it. 00:27:16.673 [2024-11-20 15:36:20.368153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.673 [2024-11-20 15:36:20.368187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.673 qpair failed and we were unable to recover it. 00:27:16.673 [2024-11-20 15:36:20.368445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.673 [2024-11-20 15:36:20.368476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.673 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.368658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.368689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 
00:27:16.674 [2024-11-20 15:36:20.368936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.368982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.369301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.369333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.369608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.369640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.369925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.369965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.370239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.370271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 
00:27:16.674 [2024-11-20 15:36:20.370466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.370498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.370679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.370710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.370920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.370962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.371158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.371189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.371493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.371525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 
00:27:16.674 [2024-11-20 15:36:20.371719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.371750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.372043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.372076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.372263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.372294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.372482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.372514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.372722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.372754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 
00:27:16.674 [2024-11-20 15:36:20.373028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.373061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.373373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.373405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.373659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.373690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.373868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.373907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.374193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.374226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 
00:27:16.674 [2024-11-20 15:36:20.374449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.374480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.374752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.374783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.375132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.375208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.375427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.375463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.375757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.375789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 
00:27:16.674 [2024-11-20 15:36:20.376052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.376086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.376386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.376418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.376715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.376747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.376996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.377029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.377172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.377202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 
00:27:16.674 [2024-11-20 15:36:20.377397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.377429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.377624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.377656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.377924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.377966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.378220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.674 [2024-11-20 15:36:20.378252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.674 qpair failed and we were unable to recover it. 00:27:16.674 [2024-11-20 15:36:20.378447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.378478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 
00:27:16.675 [2024-11-20 15:36:20.378753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.378794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.379081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.379115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.379381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.379413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.379556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.379586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.379855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.379886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 
00:27:16.675 [2024-11-20 15:36:20.380175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.380208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.380412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.380442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.380619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.380650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.380927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.380967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.381243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.381273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 
00:27:16.675 [2024-11-20 15:36:20.381530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.381562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.381828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.381859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.382072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.382105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.382355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.382387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.382691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.382722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 
00:27:16.675 [2024-11-20 15:36:20.382854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.382886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.383170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.383203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.383402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.383432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.383651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.383682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.383821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.383852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 
00:27:16.675 [2024-11-20 15:36:20.384146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.384177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.384447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.384479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.384730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.384763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.385041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.385072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.385351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.385382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 
00:27:16.675 [2024-11-20 15:36:20.385637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.385670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.385982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.386013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.386164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.386196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.386499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.386531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 00:27:16.675 [2024-11-20 15:36:20.386800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.386831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it. 
00:27:16.675 [2024-11-20 15:36:20.387033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.675 [2024-11-20 15:36:20.387067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:16.675 qpair failed and we were unable to recover it.
00:27:16.677 (the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7fdef8000b90 repeated continuously through [2024-11-20 15:36:20.407349]; every retry ended with "qpair failed and we were unable to recover it.")
00:27:16.677 [2024-11-20 15:36:20.407700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.677 [2024-11-20 15:36:20.407777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.677 qpair failed and we were unable to recover it.
00:27:16.679 (the same error pair for tqpair=0x7fdeec000b90 repeated continuously through [2024-11-20 15:36:20.417702]; every retry ended with "qpair failed and we were unable to recover it.")
00:27:16.679 [2024-11-20 15:36:20.417995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.418028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.418215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.418247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.418526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.418563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.418829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.418862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.419147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.419180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 
00:27:16.679 [2024-11-20 15:36:20.419430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.419462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.419693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.419724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.420030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.420062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.420210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.420242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.420519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.420551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 
00:27:16.679 [2024-11-20 15:36:20.420813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.420844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.421029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.421062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.421342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.421374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.421628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.421659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.421910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.421941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 
00:27:16.679 [2024-11-20 15:36:20.422188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.422220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.422430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.422461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.422619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.422650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.422827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.422858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.423116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.423149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 
00:27:16.679 [2024-11-20 15:36:20.423345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.423376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.423575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.423605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.423811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.423841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.424127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.424161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.424414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.424446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 
00:27:16.679 [2024-11-20 15:36:20.424705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.424735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.424925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.424967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.425246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.425279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.425581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.425612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.425848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.425879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 
00:27:16.679 [2024-11-20 15:36:20.426187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.426221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.426483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.426515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.426837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.426869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.427098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.427130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 00:27:16.679 [2024-11-20 15:36:20.427280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.679 [2024-11-20 15:36:20.427311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.679 qpair failed and we were unable to recover it. 
00:27:16.680 [2024-11-20 15:36:20.427530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.427562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.427813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.427843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.428122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.428154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.428440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.428472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.428724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.428754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 
00:27:16.680 [2024-11-20 15:36:20.428962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.428995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.429217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.429248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.429500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.429537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.429666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.429698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.429959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.429992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 
00:27:16.680 [2024-11-20 15:36:20.430218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.430249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.430434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.430465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.430667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.430699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.430926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.430966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.431244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.431275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 
00:27:16.680 [2024-11-20 15:36:20.431573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.431605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.431824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.431857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.432115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.432148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.432400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.432431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.432731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.432762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 
00:27:16.680 [2024-11-20 15:36:20.433057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.433091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.433364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.433395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.433537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.433568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.433765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.433796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.434074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.434107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 
00:27:16.680 [2024-11-20 15:36:20.434351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.434382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.434661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.434693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.434983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.435015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.435238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.435270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.435548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.435579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 
00:27:16.680 [2024-11-20 15:36:20.435783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.435813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.436013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.436046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.436282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.436314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.436587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.436618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.436914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.436946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 
00:27:16.680 [2024-11-20 15:36:20.437239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.437271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.437418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.680 [2024-11-20 15:36:20.437450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.680 qpair failed and we were unable to recover it. 00:27:16.680 [2024-11-20 15:36:20.437729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.681 [2024-11-20 15:36:20.437760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.681 qpair failed and we were unable to recover it. 00:27:16.681 [2024-11-20 15:36:20.438019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.681 [2024-11-20 15:36:20.438052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.681 qpair failed and we were unable to recover it. 00:27:16.681 [2024-11-20 15:36:20.438326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.681 [2024-11-20 15:36:20.438357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.681 qpair failed and we were unable to recover it. 
00:27:16.681 [2024-11-20 15:36:20.438567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.681 [2024-11-20 15:36:20.438599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:16.681 qpair failed and we were unable to recover it.
00:27:16.681 [last three messages repeated for tqpair=0x7fdeec000b90 (addr=10.0.0.2, port=4420) from 15:36:20.438850 through 15:36:20.470569]
00:27:16.684 [2024-11-20 15:36:20.470871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.470901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.471198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.471232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.471504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.471535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.471672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.471703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.471895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.471926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 
00:27:16.684 [2024-11-20 15:36:20.472198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.472231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.472483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.472515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.472812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.472843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.472982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.473015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.473273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.473305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 
00:27:16.684 [2024-11-20 15:36:20.473556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.473587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.473786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.473817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.473991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.474032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.474315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.474347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.474626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.474657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 
00:27:16.684 [2024-11-20 15:36:20.474969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.475003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.475288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.475320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.475571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.475602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.475804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.475836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.476045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.476078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 
00:27:16.684 [2024-11-20 15:36:20.476275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.476307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.476446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.476479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.476732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.476763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.476969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.477002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.477224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.477255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 
00:27:16.684 [2024-11-20 15:36:20.477530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.477562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.477688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.477720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-11-20 15:36:20.477901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-11-20 15:36:20.477933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.478153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.478185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.478404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.478435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 
00:27:16.685 [2024-11-20 15:36:20.478685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.478716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.478986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.479019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.479271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.479303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.479603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.479634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.479921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.479960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 
00:27:16.685 [2024-11-20 15:36:20.480167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.480200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.480474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.480505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.480794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.480826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.481082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.481116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.481401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.481432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 
00:27:16.685 [2024-11-20 15:36:20.481711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.481741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.481980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.482015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.482295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.482326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.482609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.482640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.482835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.482867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 
00:27:16.685 [2024-11-20 15:36:20.483162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.483195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.483463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.483494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.483637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.483669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.483943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.483984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.484129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.484160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 
00:27:16.685 [2024-11-20 15:36:20.484455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.484487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.484739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.484769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.484900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.484938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.485158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.485191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.485400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.485431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 
00:27:16.685 [2024-11-20 15:36:20.485654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.485685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.485860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.485893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.486102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.486135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.486400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.486431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.486731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.486762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 
00:27:16.685 [2024-11-20 15:36:20.487034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.487067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.487353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.487385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.487665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.487697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.487985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.488018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-11-20 15:36:20.488298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-11-20 15:36:20.488329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 
00:27:16.685 [2024-11-20 15:36:20.488611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-11-20 15:36:20.488643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-11-20 15:36:20.488929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-11-20 15:36:20.488970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-11-20 15:36:20.489244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-11-20 15:36:20.489275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-11-20 15:36:20.489532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-11-20 15:36:20.489563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-11-20 15:36:20.489869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-11-20 15:36:20.489900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 
00:27:16.686 [2024-11-20 15:36:20.490183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-11-20 15:36:20.490216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-11-20 15:36:20.490414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-11-20 15:36:20.490445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-11-20 15:36:20.490699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-11-20 15:36:20.490730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-11-20 15:36:20.490972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-11-20 15:36:20.491007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-11-20 15:36:20.491266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-11-20 15:36:20.491298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 
00:27:16.686 [2024-11-20 15:36:20.491605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-11-20 15:36:20.491637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-11-20 15:36:20.491900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-11-20 15:36:20.491931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-11-20 15:36:20.492234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-11-20 15:36:20.492267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-11-20 15:36:20.492531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-11-20 15:36:20.492562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-11-20 15:36:20.492850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-11-20 15:36:20.492881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 
00:27:16.689 [2024-11-20 15:36:20.524375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.689 [2024-11-20 15:36:20.524407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.689 qpair failed and we were unable to recover it. 00:27:16.689 [2024-11-20 15:36:20.524687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.689 [2024-11-20 15:36:20.524718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.689 qpair failed and we were unable to recover it. 00:27:16.689 [2024-11-20 15:36:20.525005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.689 [2024-11-20 15:36:20.525039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.689 qpair failed and we were unable to recover it. 00:27:16.689 [2024-11-20 15:36:20.525318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.689 [2024-11-20 15:36:20.525351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.689 qpair failed and we were unable to recover it. 00:27:16.689 [2024-11-20 15:36:20.525549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.689 [2024-11-20 15:36:20.525580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.689 qpair failed and we were unable to recover it. 
00:27:16.689 [2024-11-20 15:36:20.525712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.689 [2024-11-20 15:36:20.525743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.689 qpair failed and we were unable to recover it. 00:27:16.689 [2024-11-20 15:36:20.526017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.689 [2024-11-20 15:36:20.526050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.689 qpair failed and we were unable to recover it. 00:27:16.689 [2024-11-20 15:36:20.526280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.689 [2024-11-20 15:36:20.526313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.689 qpair failed and we were unable to recover it. 00:27:16.689 [2024-11-20 15:36:20.526507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.689 [2024-11-20 15:36:20.526538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.689 qpair failed and we were unable to recover it. 00:27:16.689 [2024-11-20 15:36:20.526737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.689 [2024-11-20 15:36:20.526771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.689 qpair failed and we were unable to recover it. 
00:27:16.689 [2024-11-20 15:36:20.527022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.689 [2024-11-20 15:36:20.527055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.689 qpair failed and we were unable to recover it. 00:27:16.689 [2024-11-20 15:36:20.527335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.689 [2024-11-20 15:36:20.527368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.689 qpair failed and we were unable to recover it. 00:27:16.689 [2024-11-20 15:36:20.527563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.689 [2024-11-20 15:36:20.527596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.689 qpair failed and we were unable to recover it. 00:27:16.689 [2024-11-20 15:36:20.527797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.689 [2024-11-20 15:36:20.527831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.689 qpair failed and we were unable to recover it. 00:27:16.689 [2024-11-20 15:36:20.528085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.689 [2024-11-20 15:36:20.528120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.689 qpair failed and we were unable to recover it. 
00:27:16.689 [2024-11-20 15:36:20.528329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.689 [2024-11-20 15:36:20.528363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.689 qpair failed and we were unable to recover it. 00:27:16.689 [2024-11-20 15:36:20.528623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.689 [2024-11-20 15:36:20.528655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.689 qpair failed and we were unable to recover it. 00:27:16.689 [2024-11-20 15:36:20.528966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.689 [2024-11-20 15:36:20.528999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.689 qpair failed and we were unable to recover it. 00:27:16.689 [2024-11-20 15:36:20.529255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.529286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.529585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.529617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 
00:27:16.690 [2024-11-20 15:36:20.529850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.529881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.530084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.530118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.530320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.530357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.530640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.530672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.530974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.531008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 
00:27:16.690 [2024-11-20 15:36:20.531220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.531252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.531531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.531563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.531850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.531882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.532089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.532123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.532414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.532446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 
00:27:16.690 [2024-11-20 15:36:20.532668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.532701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.532965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.532998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.533268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.533300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.533497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.533529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.533664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.533695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 
00:27:16.690 [2024-11-20 15:36:20.533898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.533930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.534239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.534272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.534466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.534496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.534760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.534792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.535050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.535085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 
00:27:16.690 [2024-11-20 15:36:20.535311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.535343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.535637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.535668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.535967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.536000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.536271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.536304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.536577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.536609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 
00:27:16.690 [2024-11-20 15:36:20.536885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.536917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.537209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.537242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.537375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.537407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.537683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.537715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.538000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.538035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 
00:27:16.690 [2024-11-20 15:36:20.538288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.538319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.538534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.538567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.538678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.538710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.538927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.538971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 00:27:16.690 [2024-11-20 15:36:20.539244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.690 [2024-11-20 15:36:20.539275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.690 qpair failed and we were unable to recover it. 
00:27:16.690 [2024-11-20 15:36:20.539464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.691 [2024-11-20 15:36:20.539496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.691 qpair failed and we were unable to recover it. 00:27:16.691 [2024-11-20 15:36:20.539766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.691 [2024-11-20 15:36:20.539798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.691 qpair failed and we were unable to recover it. 00:27:16.691 [2024-11-20 15:36:20.540000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.691 [2024-11-20 15:36:20.540032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.691 qpair failed and we were unable to recover it. 00:27:16.691 [2024-11-20 15:36:20.540301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.691 [2024-11-20 15:36:20.540334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.691 qpair failed and we were unable to recover it. 00:27:16.691 [2024-11-20 15:36:20.540599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.691 [2024-11-20 15:36:20.540632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.691 qpair failed and we were unable to recover it. 
00:27:16.691 [2024-11-20 15:36:20.540829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.691 [2024-11-20 15:36:20.540861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.691 qpair failed and we were unable to recover it. 00:27:16.691 [2024-11-20 15:36:20.541111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.691 [2024-11-20 15:36:20.541145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.691 qpair failed and we were unable to recover it. 00:27:16.691 [2024-11-20 15:36:20.541446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.691 [2024-11-20 15:36:20.541484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.691 qpair failed and we were unable to recover it. 00:27:16.691 [2024-11-20 15:36:20.541745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.691 [2024-11-20 15:36:20.541777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.691 qpair failed and we were unable to recover it. 00:27:16.691 [2024-11-20 15:36:20.542073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.691 [2024-11-20 15:36:20.542106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.691 qpair failed and we were unable to recover it. 
00:27:16.691 [2024-11-20 15:36:20.542315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.691 [2024-11-20 15:36:20.542348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.691 qpair failed and we were unable to recover it. 00:27:16.691 [2024-11-20 15:36:20.542626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.691 [2024-11-20 15:36:20.542659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.691 qpair failed and we were unable to recover it. 00:27:16.691 [2024-11-20 15:36:20.542913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.691 [2024-11-20 15:36:20.542946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.691 qpair failed and we were unable to recover it. 00:27:16.691 [2024-11-20 15:36:20.543179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.691 [2024-11-20 15:36:20.543210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.691 qpair failed and we were unable to recover it. 00:27:16.691 [2024-11-20 15:36:20.543463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.691 [2024-11-20 15:36:20.543496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.691 qpair failed and we were unable to recover it. 
00:27:16.691 [2024-11-20 15:36:20.543750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.691 [2024-11-20 15:36:20.543783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.691 qpair failed and we were unable to recover it. 00:27:16.691 [2024-11-20 15:36:20.544037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.691 [2024-11-20 15:36:20.544070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.691 qpair failed and we were unable to recover it. 00:27:16.691 [2024-11-20 15:36:20.544373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.691 [2024-11-20 15:36:20.544406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.691 qpair failed and we were unable to recover it. 00:27:16.691 [2024-11-20 15:36:20.544670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.691 [2024-11-20 15:36:20.544701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.691 qpair failed and we were unable to recover it. 00:27:16.691 [2024-11-20 15:36:20.544964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.691 [2024-11-20 15:36:20.544997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.691 qpair failed and we were unable to recover it. 
00:27:16.691 [2024-11-20 15:36:20.545254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.691 [2024-11-20 15:36:20.545287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:16.691 qpair failed and we were unable to recover it.
[the identical three-line sequence — connect() failed, errno = 111 / sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 15:36:20.545254 through 15:36:20.576586; repeats elided]
00:27:16.977 [2024-11-20 15:36:20.576858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.977 [2024-11-20 15:36:20.576889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.977 qpair failed and we were unable to recover it. 00:27:16.977 [2024-11-20 15:36:20.577161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.977 [2024-11-20 15:36:20.577195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.977 qpair failed and we were unable to recover it. 00:27:16.977 [2024-11-20 15:36:20.577491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.977 [2024-11-20 15:36:20.577522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.977 qpair failed and we were unable to recover it. 00:27:16.977 [2024-11-20 15:36:20.577730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.977 [2024-11-20 15:36:20.577761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.977 qpair failed and we were unable to recover it. 00:27:16.977 [2024-11-20 15:36:20.577942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.977 [2024-11-20 15:36:20.578001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.977 qpair failed and we were unable to recover it. 
00:27:16.977 [2024-11-20 15:36:20.578263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.977 [2024-11-20 15:36:20.578297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.977 qpair failed and we were unable to recover it. 00:27:16.977 [2024-11-20 15:36:20.578432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.977 [2024-11-20 15:36:20.578466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.977 qpair failed and we were unable to recover it. 00:27:16.977 [2024-11-20 15:36:20.578650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.977 [2024-11-20 15:36:20.578683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.977 qpair failed and we were unable to recover it. 00:27:16.977 [2024-11-20 15:36:20.578883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.977 [2024-11-20 15:36:20.578915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.977 qpair failed and we were unable to recover it. 00:27:16.977 [2024-11-20 15:36:20.579214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.579249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 
00:27:16.978 [2024-11-20 15:36:20.579513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.579547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.579731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.579763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.579970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.580004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.580282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.580315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.580438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.580472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 
00:27:16.978 [2024-11-20 15:36:20.580723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.580755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.581014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.581050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.581340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.581374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.581648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.581682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.581980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.582022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 
00:27:16.978 [2024-11-20 15:36:20.582280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.582313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.582601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.582634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.582862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.582893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.583087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.583121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.583317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.583350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 
00:27:16.978 [2024-11-20 15:36:20.583577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.583610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.583868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.583901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.584175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.584209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.584490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.584522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.584733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.584766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 
00:27:16.978 [2024-11-20 15:36:20.584960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.584994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.585197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.585230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.585514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.585546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.585752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.585787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.586089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.586124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 
00:27:16.978 [2024-11-20 15:36:20.586326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.586357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.586633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.586664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.586861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.586894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.587109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.587142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.587421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.587453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 
00:27:16.978 [2024-11-20 15:36:20.587652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.587684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.587887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.587919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.588203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.588236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.588449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.588482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.588733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.588764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 
00:27:16.978 [2024-11-20 15:36:20.588960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.588993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.978 [2024-11-20 15:36:20.589303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.978 [2024-11-20 15:36:20.589335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.978 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.589635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.589667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.589866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.589899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.590188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.590223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 
00:27:16.979 [2024-11-20 15:36:20.590407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.590440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.590693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.590724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.590929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.590986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.591123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.591158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.591309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.591343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 
00:27:16.979 [2024-11-20 15:36:20.591531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.591562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.591837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.591868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.592077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.592111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.592385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.592418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.592563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.592599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 
00:27:16.979 [2024-11-20 15:36:20.592886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.592918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.593227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.593260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.593545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.593577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.593808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.593841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.594054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.594089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 
00:27:16.979 [2024-11-20 15:36:20.594375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.594409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.594600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.594632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.594885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.594918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.595125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.595158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.595365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.595398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 
00:27:16.979 [2024-11-20 15:36:20.595688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.595720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.595862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.595893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.596108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.596142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.596361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.596393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 00:27:16.979 [2024-11-20 15:36:20.596584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.596614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it. 
00:27:16.979 [2024-11-20 15:36:20.596863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.979 [2024-11-20 15:36:20.596897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.979 qpair failed and we were unable to recover it.
[... the same three-line error repeats ~115 times between 15:36:20.596863 and 15:36:20.624569: every connect() attempt to 10.0.0.2 port 4420 for tqpair=0x7fdeec000b90 was refused (errno = 111) and the qpair could not be recovered ...]
00:27:16.983 [2024-11-20 15:36:20.624845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.624876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.625053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.625085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.625233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.625264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.625526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.625557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.625753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.625784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 
00:27:16.983 [2024-11-20 15:36:20.626002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.626034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.626287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.626317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.626467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.626500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.626712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.626741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.626879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.626910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 
00:27:16.983 [2024-11-20 15:36:20.627105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.627137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.627337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.627369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.627630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.627663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.627781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.627811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.627932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.627971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 
00:27:16.983 [2024-11-20 15:36:20.628253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.628286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.628478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.628509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.628702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.628733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.628924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.628967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.629104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.629135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 
00:27:16.983 [2024-11-20 15:36:20.629381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.629413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.629530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.629571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.629761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.629791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.629938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.629998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.630195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.630227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 
00:27:16.983 [2024-11-20 15:36:20.630498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.630530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.630668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.630699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.630902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.630934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.631171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.631203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.631420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.631457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 
00:27:16.983 [2024-11-20 15:36:20.631651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.631683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.631881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.631912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.632213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.632247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.632439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.983 [2024-11-20 15:36:20.632470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.983 qpair failed and we were unable to recover it. 00:27:16.983 [2024-11-20 15:36:20.632671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.632704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 
00:27:16.984 [2024-11-20 15:36:20.632920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.632962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.633241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.633274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.633547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.633580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.633777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.633809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.634006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.634040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 
00:27:16.984 [2024-11-20 15:36:20.634166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.634197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.634423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.634454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.634732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.634765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.634910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.634941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.635143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.635175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 
00:27:16.984 [2024-11-20 15:36:20.635440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.635472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.635667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.635700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.635899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.635932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.636176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.636208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.636329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.636360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 
00:27:16.984 [2024-11-20 15:36:20.636501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.636531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.636659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.636690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.636938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.636982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.637107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.637140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.637264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.637296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 
00:27:16.984 [2024-11-20 15:36:20.637494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.637524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.637676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.637708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.637910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.637942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.638106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.638139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.638392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.638425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 
00:27:16.984 [2024-11-20 15:36:20.638678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.638710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.638999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.639034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.639385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.639417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.639637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.639668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.639922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.639964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 
00:27:16.984 [2024-11-20 15:36:20.640159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.640191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.984 qpair failed and we were unable to recover it. 00:27:16.984 [2024-11-20 15:36:20.640389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.984 [2024-11-20 15:36:20.640420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.985 qpair failed and we were unable to recover it. 00:27:16.985 [2024-11-20 15:36:20.640622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.985 [2024-11-20 15:36:20.640655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.985 qpair failed and we were unable to recover it. 00:27:16.985 [2024-11-20 15:36:20.640874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.985 [2024-11-20 15:36:20.640906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.985 qpair failed and we were unable to recover it. 00:27:16.985 [2024-11-20 15:36:20.641104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.985 [2024-11-20 15:36:20.641143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.985 qpair failed and we were unable to recover it. 
00:27:16.985 [2024-11-20 15:36:20.641285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.985 [2024-11-20 15:36:20.641316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.985 qpair failed and we were unable to recover it. 00:27:16.985 [2024-11-20 15:36:20.641462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.985 [2024-11-20 15:36:20.641493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.985 qpair failed and we were unable to recover it. 00:27:16.985 [2024-11-20 15:36:20.641678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.985 [2024-11-20 15:36:20.641710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.985 qpair failed and we were unable to recover it. 00:27:16.985 [2024-11-20 15:36:20.641833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.985 [2024-11-20 15:36:20.641863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.985 qpair failed and we were unable to recover it. 00:27:16.985 [2024-11-20 15:36:20.642061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.985 [2024-11-20 15:36:20.642093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.985 qpair failed and we were unable to recover it. 
00:27:16.985 [2024-11-20 15:36:20.642295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.985 [2024-11-20 15:36:20.642326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.985 qpair failed and we were unable to recover it. 
00:27:16.985 [... the identical error pair — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously from 15:36:20.642539 through 15:36:20.668575; repeated occurrences elided ...]
00:27:16.988 [2024-11-20 15:36:20.668710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.668741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.988 [2024-11-20 15:36:20.668858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.668889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.988 [2024-11-20 15:36:20.669182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.669214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.988 [2024-11-20 15:36:20.669409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.669441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.988 [2024-11-20 15:36:20.669626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.669656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 
00:27:16.988 [2024-11-20 15:36:20.669853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.669885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.988 [2024-11-20 15:36:20.670088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.670119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.988 [2024-11-20 15:36:20.670312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.670344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.988 [2024-11-20 15:36:20.670587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.670619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.988 [2024-11-20 15:36:20.670734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.670769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 
00:27:16.988 [2024-11-20 15:36:20.671012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.671044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.988 [2024-11-20 15:36:20.671183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.671213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.988 [2024-11-20 15:36:20.671416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.671447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.988 [2024-11-20 15:36:20.671645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.671675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.988 [2024-11-20 15:36:20.671806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.671836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 
00:27:16.988 [2024-11-20 15:36:20.672035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.672068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.988 [2024-11-20 15:36:20.672283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.672315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.988 [2024-11-20 15:36:20.672554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.672585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.988 [2024-11-20 15:36:20.672829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.672860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.988 [2024-11-20 15:36:20.672989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.673019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 
00:27:16.988 [2024-11-20 15:36:20.673207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.673237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.988 [2024-11-20 15:36:20.673409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.673439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.988 [2024-11-20 15:36:20.673630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.673661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.988 [2024-11-20 15:36:20.673942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.673985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.988 [2024-11-20 15:36:20.674184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.674217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 
00:27:16.988 [2024-11-20 15:36:20.674484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.988 [2024-11-20 15:36:20.674516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.988 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.674804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.674835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.675022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.675055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.675310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.675341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.675532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.675562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 
00:27:16.989 [2024-11-20 15:36:20.675735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.675765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.676036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.676069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.676264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.676295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.676486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.676517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.676650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.676680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 
00:27:16.989 [2024-11-20 15:36:20.676962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.676996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.677269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.677301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.677580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.677612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.677803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.677834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.678100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.678132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 
00:27:16.989 [2024-11-20 15:36:20.678380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.678412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.678600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.678632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.678850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.678882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.679010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.679046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.679276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.679306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 
00:27:16.989 [2024-11-20 15:36:20.679572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.679603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.679775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.679806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.679999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.680031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.680233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.680263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.680390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.680421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 
00:27:16.989 [2024-11-20 15:36:20.680621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.680653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.680935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.680979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.681156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.681188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.681477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.681509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.681769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.681801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 
00:27:16.989 [2024-11-20 15:36:20.681996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.682038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.682212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.682242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.682437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.682468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.682720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.682751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 00:27:16.989 [2024-11-20 15:36:20.683014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.989 [2024-11-20 15:36:20.683047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.989 qpair failed and we were unable to recover it. 
00:27:16.990 [2024-11-20 15:36:20.683348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.990 [2024-11-20 15:36:20.683381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.990 qpair failed and we were unable to recover it. 00:27:16.990 [2024-11-20 15:36:20.683516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.990 [2024-11-20 15:36:20.683547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.990 qpair failed and we were unable to recover it. 00:27:16.990 [2024-11-20 15:36:20.683811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.990 [2024-11-20 15:36:20.683842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.990 qpair failed and we were unable to recover it. 00:27:16.990 [2024-11-20 15:36:20.683967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.990 [2024-11-20 15:36:20.683999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.990 qpair failed and we were unable to recover it. 00:27:16.990 [2024-11-20 15:36:20.684190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.990 [2024-11-20 15:36:20.684220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.990 qpair failed and we were unable to recover it. 
00:27:16.990 [2024-11-20 15:36:20.684456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.990 [2024-11-20 15:36:20.684486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.990 qpair failed and we were unable to recover it. 00:27:16.990 [2024-11-20 15:36:20.684666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.990 [2024-11-20 15:36:20.684708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.990 qpair failed and we were unable to recover it. 00:27:16.990 [2024-11-20 15:36:20.684828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.990 [2024-11-20 15:36:20.684858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.990 qpair failed and we were unable to recover it. 00:27:16.990 [2024-11-20 15:36:20.684998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.990 [2024-11-20 15:36:20.685032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.990 qpair failed and we were unable to recover it. 00:27:16.990 [2024-11-20 15:36:20.685226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.990 [2024-11-20 15:36:20.685257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.990 qpair failed and we were unable to recover it. 
00:27:16.990 [2024-11-20 15:36:20.685437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.990 [2024-11-20 15:36:20.685468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.990 qpair failed and we were unable to recover it. 00:27:16.990 [2024-11-20 15:36:20.685667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.990 [2024-11-20 15:36:20.685698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.990 qpair failed and we were unable to recover it. 00:27:16.990 [2024-11-20 15:36:20.685893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.990 [2024-11-20 15:36:20.685924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.990 qpair failed and we were unable to recover it. 00:27:16.990 [2024-11-20 15:36:20.686123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.990 [2024-11-20 15:36:20.686154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.990 qpair failed and we were unable to recover it. 00:27:16.990 [2024-11-20 15:36:20.686268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.990 [2024-11-20 15:36:20.686300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.990 qpair failed and we were unable to recover it. 
00:27:16.990 [2024-11-20 15:36:20.686421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.990 [2024-11-20 15:36:20.686450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.990 qpair failed and we were unable to recover it. 00:27:16.990 [2024-11-20 15:36:20.686718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.990 [2024-11-20 15:36:20.686749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.990 qpair failed and we were unable to recover it. 00:27:16.990 [2024-11-20 15:36:20.686994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.990 [2024-11-20 15:36:20.687049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.990 qpair failed and we were unable to recover it. 00:27:16.990 [2024-11-20 15:36:20.687250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.990 [2024-11-20 15:36:20.687282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.990 qpair failed and we were unable to recover it. 00:27:16.990 [2024-11-20 15:36:20.687405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.990 [2024-11-20 15:36:20.687436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.990 qpair failed and we were unable to recover it. 
00:27:16.990 [... identical record sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeated for 110 further connection attempts, timestamps 2024-11-20 15:36:20.687708 through 15:36:20.714124 ...]
00:27:16.993 [2024-11-20 15:36:20.714364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.993 [2024-11-20 15:36:20.714395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.993 qpair failed and we were unable to recover it. 00:27:16.993 [2024-11-20 15:36:20.714665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.993 [2024-11-20 15:36:20.714695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.993 qpair failed and we were unable to recover it. 00:27:16.993 [2024-11-20 15:36:20.714936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.993 [2024-11-20 15:36:20.714975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.993 qpair failed and we were unable to recover it. 00:27:16.993 [2024-11-20 15:36:20.715164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.993 [2024-11-20 15:36:20.715196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.993 qpair failed and we were unable to recover it. 00:27:16.993 [2024-11-20 15:36:20.715459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.993 [2024-11-20 15:36:20.715488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.993 qpair failed and we were unable to recover it. 
00:27:16.993 [2024-11-20 15:36:20.715697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.993 [2024-11-20 15:36:20.715728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.993 qpair failed and we were unable to recover it. 00:27:16.993 [2024-11-20 15:36:20.715985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.993 [2024-11-20 15:36:20.716017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.993 qpair failed and we were unable to recover it. 00:27:16.993 [2024-11-20 15:36:20.716197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.993 [2024-11-20 15:36:20.716228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.993 qpair failed and we were unable to recover it. 00:27:16.993 [2024-11-20 15:36:20.716473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.993 [2024-11-20 15:36:20.716505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.993 qpair failed and we were unable to recover it. 00:27:16.993 [2024-11-20 15:36:20.716744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.993 [2024-11-20 15:36:20.716774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.993 qpair failed and we were unable to recover it. 
00:27:16.993 [2024-11-20 15:36:20.716974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.993 [2024-11-20 15:36:20.717007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.993 qpair failed and we were unable to recover it. 00:27:16.993 [2024-11-20 15:36:20.717269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.993 [2024-11-20 15:36:20.717299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.993 qpair failed and we were unable to recover it. 00:27:16.993 [2024-11-20 15:36:20.717549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.993 [2024-11-20 15:36:20.717580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.993 qpair failed and we were unable to recover it. 00:27:16.993 [2024-11-20 15:36:20.717773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.993 [2024-11-20 15:36:20.717804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.993 qpair failed and we were unable to recover it. 00:27:16.993 [2024-11-20 15:36:20.718064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.993 [2024-11-20 15:36:20.718098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.993 qpair failed and we were unable to recover it. 
00:27:16.994 [2024-11-20 15:36:20.718369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.718399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.718644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.718675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.718886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.718917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.719185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.719216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.719458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.719489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 
00:27:16.994 [2024-11-20 15:36:20.719684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.719714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.719900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.719930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.720202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.720234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.720441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.720471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.720723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.720754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 
00:27:16.994 [2024-11-20 15:36:20.721015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.721048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.721253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.721283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.721552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.721582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.721780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.721810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.722078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.722110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 
00:27:16.994 [2024-11-20 15:36:20.722357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.722387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.722654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.722686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.722975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.723008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.723281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.723312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.723495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.723527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 
00:27:16.994 [2024-11-20 15:36:20.723794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.723825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.724036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.724075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.724289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.724320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.724612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.724643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.724914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.724945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 
00:27:16.994 [2024-11-20 15:36:20.725228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.725258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.725440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.725471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.725643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.725674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.725902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.725932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.726240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.726271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 
00:27:16.994 [2024-11-20 15:36:20.726530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.726561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.726848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.726879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.727157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.727189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.727478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.727509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 00:27:16.994 [2024-11-20 15:36:20.727786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.994 [2024-11-20 15:36:20.727816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.994 qpair failed and we were unable to recover it. 
00:27:16.994 [2024-11-20 15:36:20.728121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.728154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 00:27:16.995 [2024-11-20 15:36:20.728414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.728444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 00:27:16.995 [2024-11-20 15:36:20.728722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.728753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 00:27:16.995 [2024-11-20 15:36:20.728996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.729028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 00:27:16.995 [2024-11-20 15:36:20.729322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.729352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 
00:27:16.995 [2024-11-20 15:36:20.729617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.729647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 00:27:16.995 [2024-11-20 15:36:20.729868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.729899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 00:27:16.995 [2024-11-20 15:36:20.730181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.730213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 00:27:16.995 [2024-11-20 15:36:20.730492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.730523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 00:27:16.995 [2024-11-20 15:36:20.730776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.730807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 
00:27:16.995 [2024-11-20 15:36:20.731068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.731100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 00:27:16.995 [2024-11-20 15:36:20.731346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.731377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 00:27:16.995 [2024-11-20 15:36:20.731669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.731699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 00:27:16.995 [2024-11-20 15:36:20.731990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.732024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 00:27:16.995 [2024-11-20 15:36:20.732296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.732327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 
00:27:16.995 [2024-11-20 15:36:20.732614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.732645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 00:27:16.995 [2024-11-20 15:36:20.732924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.732964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 00:27:16.995 [2024-11-20 15:36:20.733204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.733234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 00:27:16.995 [2024-11-20 15:36:20.733483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.733514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 00:27:16.995 [2024-11-20 15:36:20.733690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.733720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 
00:27:16.995 [2024-11-20 15:36:20.733967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.733999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 00:27:16.995 [2024-11-20 15:36:20.734218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.734249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 00:27:16.995 [2024-11-20 15:36:20.734508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.734538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 00:27:16.995 [2024-11-20 15:36:20.734797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.734828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 00:27:16.995 [2024-11-20 15:36:20.735072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.995 [2024-11-20 15:36:20.735105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.995 qpair failed and we were unable to recover it. 
00:27:16.995 [2024-11-20 15:36:20.735354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.995 [2024-11-20 15:36:20.735384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:16.995 qpair failed and we were unable to recover it.
00:27:16.995 [... same three-message sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated continuously through 2024-11-20 15:36:20.766 ...]
00:27:16.999 [2024-11-20 15:36:20.767188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.767219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.767471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.767502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.767753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.767784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.768085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.768117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.768329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.768360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 
00:27:16.999 [2024-11-20 15:36:20.768660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.768691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.768968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.769007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.769285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.769316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.769591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.769622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.769823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.769854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 
00:27:16.999 [2024-11-20 15:36:20.770115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.770147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.770449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.770480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.770704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.770736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.770919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.770958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.771212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.771243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 
00:27:16.999 [2024-11-20 15:36:20.771539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.771570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.771760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.771792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.772090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.772123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.772327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.772359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.772612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.772644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 
00:27:16.999 [2024-11-20 15:36:20.772906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.772938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.773222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.773254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.773386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.773417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.773618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.773650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.773835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.773867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 
00:27:16.999 [2024-11-20 15:36:20.774071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.774105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.774378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.774412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.774655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.774688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.774901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.774932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.775146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.775179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 
00:27:16.999 [2024-11-20 15:36:20.775435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.775469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.775745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.775777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.776028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.776060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.776254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.776285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:16.999 qpair failed and we were unable to recover it. 00:27:16.999 [2024-11-20 15:36:20.776591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.999 [2024-11-20 15:36:20.776622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 
00:27:17.000 [2024-11-20 15:36:20.776920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.776962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.777180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.777212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.777487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.777519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.777671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.777704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.777964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.777998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 
00:27:17.000 [2024-11-20 15:36:20.778196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.778227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.778338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.778369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.778624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.778657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.778848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.778879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.779063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.779096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 
00:27:17.000 [2024-11-20 15:36:20.779376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.779409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.779546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.779584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.779708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.779739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.779962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.779998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.780262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.780295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 
00:27:17.000 [2024-11-20 15:36:20.780552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.780584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.780788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.780820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.781091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.781125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.781303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.781336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.781609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.781643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 
00:27:17.000 [2024-11-20 15:36:20.781923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.781965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.782239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.782272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.782476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.782507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.782763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.782794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.783046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.783078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 
00:27:17.000 [2024-11-20 15:36:20.783336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.783368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.783571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.783602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.783886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.783916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.784225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.784260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.784534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.784565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 
00:27:17.000 [2024-11-20 15:36:20.784830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.784862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.785082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.785118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.785331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.785365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.785635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.785668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 00:27:17.000 [2024-11-20 15:36:20.785897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.000 [2024-11-20 15:36:20.785927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.000 qpair failed and we were unable to recover it. 
00:27:17.000 [2024-11-20 15:36:20.786204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.001 [2024-11-20 15:36:20.786237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.001 qpair failed and we were unable to recover it. 00:27:17.001 [2024-11-20 15:36:20.786435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.001 [2024-11-20 15:36:20.786466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.001 qpair failed and we were unable to recover it. 00:27:17.001 [2024-11-20 15:36:20.786668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.001 [2024-11-20 15:36:20.786700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.001 qpair failed and we were unable to recover it. 00:27:17.001 [2024-11-20 15:36:20.786983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.001 [2024-11-20 15:36:20.787017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.001 qpair failed and we were unable to recover it. 00:27:17.001 [2024-11-20 15:36:20.787253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.001 [2024-11-20 15:36:20.787287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.001 qpair failed and we were unable to recover it. 
00:27:17.001 [2024-11-20 15:36:20.787560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.001 [2024-11-20 15:36:20.787591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.001 qpair failed and we were unable to recover it. 00:27:17.001 [2024-11-20 15:36:20.787844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.001 [2024-11-20 15:36:20.787878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.001 qpair failed and we were unable to recover it. 00:27:17.001 [2024-11-20 15:36:20.788082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.001 [2024-11-20 15:36:20.788115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.001 qpair failed and we were unable to recover it. 00:27:17.001 [2024-11-20 15:36:20.788390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.001 [2024-11-20 15:36:20.788422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.001 qpair failed and we were unable to recover it. 00:27:17.001 [2024-11-20 15:36:20.788571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.001 [2024-11-20 15:36:20.788603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.001 qpair failed and we were unable to recover it. 
00:27:17.004 [2024-11-20 15:36:20.816533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.816565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 00:27:17.004 [2024-11-20 15:36:20.816814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.816845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 00:27:17.004 [2024-11-20 15:36:20.817113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.817146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 00:27:17.004 [2024-11-20 15:36:20.817440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.817478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 00:27:17.004 [2024-11-20 15:36:20.817770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.817801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 
00:27:17.004 [2024-11-20 15:36:20.818012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.818045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 00:27:17.004 [2024-11-20 15:36:20.818293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.818325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 00:27:17.004 [2024-11-20 15:36:20.818473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.818504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 00:27:17.004 [2024-11-20 15:36:20.818659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.818691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 00:27:17.004 [2024-11-20 15:36:20.818901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.818932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 
00:27:17.004 [2024-11-20 15:36:20.819250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.819281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 00:27:17.004 [2024-11-20 15:36:20.819536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.819568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 00:27:17.004 [2024-11-20 15:36:20.819841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.819872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 00:27:17.004 [2024-11-20 15:36:20.820054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.820087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 00:27:17.004 [2024-11-20 15:36:20.820347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.820379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 
00:27:17.004 [2024-11-20 15:36:20.820602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.820633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 00:27:17.004 [2024-11-20 15:36:20.820885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.820917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 00:27:17.004 [2024-11-20 15:36:20.821140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.821173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 00:27:17.004 [2024-11-20 15:36:20.821383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.821413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 00:27:17.004 [2024-11-20 15:36:20.821672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.821703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 
00:27:17.004 [2024-11-20 15:36:20.821904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.821935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 00:27:17.004 [2024-11-20 15:36:20.822231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.822263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 00:27:17.004 [2024-11-20 15:36:20.822533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.822564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 00:27:17.004 [2024-11-20 15:36:20.822777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.822808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 00:27:17.004 [2024-11-20 15:36:20.823081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.004 [2024-11-20 15:36:20.823114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.004 qpair failed and we were unable to recover it. 
00:27:17.004 [2024-11-20 15:36:20.823394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.823426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.823602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.823633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.823907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.823939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.824155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.824186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.824381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.824412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 
00:27:17.005 [2024-11-20 15:36:20.824637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.824668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.824812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.824843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.825101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.825134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.825404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.825435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.825639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.825671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 
00:27:17.005 [2024-11-20 15:36:20.825926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.825967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.826276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.826309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.826508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.826540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.826820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.826852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.827152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.827185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 
00:27:17.005 [2024-11-20 15:36:20.827474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.827506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.827782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.827813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.828072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.828105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.828301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.828339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.828635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.828667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 
00:27:17.005 [2024-11-20 15:36:20.828884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.828915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.829136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.829169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.829368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.829399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.829592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.829623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.829895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.829926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 
00:27:17.005 [2024-11-20 15:36:20.830137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.830169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.830349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.830379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.830653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.830684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.830810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.830841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.831159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.831191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 
00:27:17.005 [2024-11-20 15:36:20.831441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.831473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.831787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.831818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.832077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.832109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.832406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.832438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.832732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.832764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 
00:27:17.005 [2024-11-20 15:36:20.832945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.832984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.833207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.833239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.833417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.005 [2024-11-20 15:36:20.833449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.005 qpair failed and we were unable to recover it. 00:27:17.005 [2024-11-20 15:36:20.833723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.006 [2024-11-20 15:36:20.833754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.006 qpair failed and we were unable to recover it. 00:27:17.006 [2024-11-20 15:36:20.834036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.006 [2024-11-20 15:36:20.834068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.006 qpair failed and we were unable to recover it. 
00:27:17.006 [2024-11-20 15:36:20.834279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.006 [2024-11-20 15:36:20.834311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.006 qpair failed and we were unable to recover it. 00:27:17.006 [2024-11-20 15:36:20.834536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.006 [2024-11-20 15:36:20.834567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.006 qpair failed and we were unable to recover it. 00:27:17.006 [2024-11-20 15:36:20.834857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.006 [2024-11-20 15:36:20.834888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.006 qpair failed and we were unable to recover it. 00:27:17.006 [2024-11-20 15:36:20.835112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.006 [2024-11-20 15:36:20.835144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.006 qpair failed and we were unable to recover it. 00:27:17.006 [2024-11-20 15:36:20.835455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.006 [2024-11-20 15:36:20.835486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.006 qpair failed and we were unable to recover it. 
00:27:17.006 [2024-11-20 15:36:20.835767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.006 [2024-11-20 15:36:20.835797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.006 qpair failed and we were unable to recover it. 00:27:17.006 [2024-11-20 15:36:20.836059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.006 [2024-11-20 15:36:20.836092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.006 qpair failed and we were unable to recover it. 00:27:17.006 [2024-11-20 15:36:20.836344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.006 [2024-11-20 15:36:20.836375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.006 qpair failed and we were unable to recover it. 00:27:17.006 [2024-11-20 15:36:20.836483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.006 [2024-11-20 15:36:20.836514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.006 qpair failed and we were unable to recover it. 00:27:17.006 [2024-11-20 15:36:20.836797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.006 [2024-11-20 15:36:20.836828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.006 qpair failed and we were unable to recover it. 
00:27:17.006 [2024-11-20 15:36:20.837022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.006 [2024-11-20 15:36:20.837054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.006 qpair failed and we were unable to recover it. 
[... the same error pair — posix.c:1054:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 — repeats continuously from 15:36:20.837 through 15:36:20.868; every retry ends with "qpair failed and we were unable to recover it." ...]
00:27:17.288 [2024-11-20 15:36:20.868538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.868569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 00:27:17.288 [2024-11-20 15:36:20.868702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.868733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 00:27:17.288 [2024-11-20 15:36:20.868967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.869001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 00:27:17.288 [2024-11-20 15:36:20.869192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.869224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 00:27:17.288 [2024-11-20 15:36:20.869474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.869504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 
00:27:17.288 [2024-11-20 15:36:20.869728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.869759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 00:27:17.288 [2024-11-20 15:36:20.870016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.870049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 00:27:17.288 [2024-11-20 15:36:20.870245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.870276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 00:27:17.288 [2024-11-20 15:36:20.870525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.870556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 00:27:17.288 [2024-11-20 15:36:20.870755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.870785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 
00:27:17.288 [2024-11-20 15:36:20.871067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.871099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 00:27:17.288 [2024-11-20 15:36:20.871350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.871381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 00:27:17.288 [2024-11-20 15:36:20.871640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.871671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 00:27:17.288 [2024-11-20 15:36:20.871942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.871989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 00:27:17.288 [2024-11-20 15:36:20.872286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.872318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 
00:27:17.288 [2024-11-20 15:36:20.872556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.872588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 00:27:17.288 [2024-11-20 15:36:20.872862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.872893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 00:27:17.288 [2024-11-20 15:36:20.873182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.873214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 00:27:17.288 [2024-11-20 15:36:20.873357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.873388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 00:27:17.288 [2024-11-20 15:36:20.873642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.873673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 
00:27:17.288 [2024-11-20 15:36:20.873811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.873841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 00:27:17.288 [2024-11-20 15:36:20.874039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.874072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 00:27:17.288 [2024-11-20 15:36:20.874341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.874373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 00:27:17.288 [2024-11-20 15:36:20.874634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.874665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.288 qpair failed and we were unable to recover it. 00:27:17.288 [2024-11-20 15:36:20.874968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.288 [2024-11-20 15:36:20.875002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 
00:27:17.289 [2024-11-20 15:36:20.875232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.875264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.875465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.875496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.875774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.875806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.876081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.876113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.876300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.876331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 
00:27:17.289 [2024-11-20 15:36:20.876596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.876627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.876901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.876931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.877163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.877195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.877479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.877510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.877758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.877788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 
00:27:17.289 [2024-11-20 15:36:20.877896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.877927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.878226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.878259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.878447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.878478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.878678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.878709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.878979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.879012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 
00:27:17.289 [2024-11-20 15:36:20.879212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.879244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.879433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.879463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.879669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.879700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.879986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.880019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.880268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.880300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 
00:27:17.289 [2024-11-20 15:36:20.880559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.880590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.880789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.880820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.881097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.881130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.881408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.881441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.881623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.881654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 
00:27:17.289 [2024-11-20 15:36:20.881941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.881983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.882253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.882285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.882536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.882567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.882815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.882852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.883073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.883107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 
00:27:17.289 [2024-11-20 15:36:20.883384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.883415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.883698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.883729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.883996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.884028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.884257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.884288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.884591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.884621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 
00:27:17.289 [2024-11-20 15:36:20.884909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.884940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.885152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.885184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.289 qpair failed and we were unable to recover it. 00:27:17.289 [2024-11-20 15:36:20.885385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.289 [2024-11-20 15:36:20.885416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.885610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.885641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.885935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.885975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 
00:27:17.290 [2024-11-20 15:36:20.886236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.886268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.886555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.886586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.886870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.886902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.887190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.887225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.887437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.887468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 
00:27:17.290 [2024-11-20 15:36:20.887720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.887751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.887973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.888007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.888286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.888318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.888600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.888632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.888822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.888853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 
00:27:17.290 [2024-11-20 15:36:20.889117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.889151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.889353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.889384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.889587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.889618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.889894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.889925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.890136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.890168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 
00:27:17.290 [2024-11-20 15:36:20.890472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.890503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.890701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.890731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.890927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.890967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.891169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.891201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.891345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.891376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 
00:27:17.290 [2024-11-20 15:36:20.891650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.891681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.891872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.891903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.892174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.892206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.892481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.892513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.892762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.892793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 
00:27:17.290 [2024-11-20 15:36:20.892997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.893031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.893305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.893336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.893534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.893564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.893816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.893853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.894093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.894127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 
00:27:17.290 [2024-11-20 15:36:20.894402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.894433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.894648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.894679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.894915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.894946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.895203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.895234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 00:27:17.290 [2024-11-20 15:36:20.895459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.290 [2024-11-20 15:36:20.895490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.290 qpair failed and we were unable to recover it. 
00:27:17.290 [2024-11-20 15:36:20.895805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.895837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.896036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.896069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.896251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.896282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.896554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.896585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.896890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.896920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 
00:27:17.291 [2024-11-20 15:36:20.897214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.897247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.897472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.897503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.897763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.897795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.898074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.898108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.898301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.898332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 
00:27:17.291 [2024-11-20 15:36:20.898570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.898600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.898798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.898830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.899052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.899085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.899344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.899376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.899620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.899651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 
00:27:17.291 [2024-11-20 15:36:20.899900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.899932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.900161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.900192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.900374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.900405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.900724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.900755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.901033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.901068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 
00:27:17.291 [2024-11-20 15:36:20.901267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.901300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.901494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.901525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.901726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.901757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.902033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.902066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.902210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.902241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 
00:27:17.291 [2024-11-20 15:36:20.902492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.902523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.902800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.902831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.903054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.903086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.903280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.903312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.903514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.903545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 
00:27:17.291 [2024-11-20 15:36:20.903809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.903840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.904097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.904131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.291 [2024-11-20 15:36:20.904267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.291 [2024-11-20 15:36:20.904298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.291 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.904404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.904440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.904659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.904690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 
00:27:17.292 [2024-11-20 15:36:20.905004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.905038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.905338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.905369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.905549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.905580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.905842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.905874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.906152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.906186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 
00:27:17.292 [2024-11-20 15:36:20.906396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.906427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.906573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.906604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.906784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.906814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.907023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.907055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.907191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.907222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 
00:27:17.292 [2024-11-20 15:36:20.907416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.907451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.907731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.907763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.907990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.908025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.908275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.908306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.908581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.908612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 
00:27:17.292 [2024-11-20 15:36:20.908882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.908913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.909122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.909154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.909449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.909481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.909782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.909812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.909995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.910028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 
00:27:17.292 [2024-11-20 15:36:20.910294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.910326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.910627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.910658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.910929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.910973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.911277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.911308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.911557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.911589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 
00:27:17.292 [2024-11-20 15:36:20.911778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.911810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.912105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.912140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.912407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.912439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.912571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.912601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.912857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.912889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 
00:27:17.292 [2024-11-20 15:36:20.913104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.913138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.913338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.913370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.913591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.913623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.913887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.913919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.292 [2024-11-20 15:36:20.914202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.914235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 
00:27:17.292 [2024-11-20 15:36:20.914536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.292 [2024-11-20 15:36:20.914567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.292 qpair failed and we were unable to recover it. 00:27:17.293 [2024-11-20 15:36:20.914834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.293 [2024-11-20 15:36:20.914865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.293 qpair failed and we were unable to recover it. 00:27:17.293 [2024-11-20 15:36:20.915102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.293 [2024-11-20 15:36:20.915136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.293 qpair failed and we were unable to recover it. 00:27:17.293 [2024-11-20 15:36:20.915418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.293 [2024-11-20 15:36:20.915456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.293 qpair failed and we were unable to recover it. 00:27:17.293 [2024-11-20 15:36:20.915656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.293 [2024-11-20 15:36:20.915687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.293 qpair failed and we were unable to recover it. 
00:27:17.293 [2024-11-20 15:36:20.915894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.293 [2024-11-20 15:36:20.915925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.293 qpair failed and we were unable to recover it. 00:27:17.293 [2024-11-20 15:36:20.916140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.293 [2024-11-20 15:36:20.916172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.293 qpair failed and we were unable to recover it. 00:27:17.293 [2024-11-20 15:36:20.916368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.293 [2024-11-20 15:36:20.916400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.293 qpair failed and we were unable to recover it. 00:27:17.293 [2024-11-20 15:36:20.916594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.293 [2024-11-20 15:36:20.916625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.293 qpair failed and we were unable to recover it. 00:27:17.293 [2024-11-20 15:36:20.916899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.293 [2024-11-20 15:36:20.916930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.293 qpair failed and we were unable to recover it. 
00:27:17.293 [2024-11-20 15:36:20.917221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.917254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.917532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.917564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.917826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.917858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.918110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.918144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.918345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.918376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.918608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.918639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.918889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.918920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.919158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.919191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.919338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.919368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.919574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.919606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.919787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.919818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.920019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.920052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.920248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.920279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.920452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.920484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.920739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.920772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.921066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.921100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.921371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.921402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.921567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.921599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.921789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.921820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.922075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.922108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.922307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.922339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.922539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.922571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.922844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.922874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.923093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.923126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.923238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.923268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.923524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.923555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.923746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.923777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.924037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.924070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.293 qpair failed and we were unable to recover it.
00:27:17.293 [2024-11-20 15:36:20.924267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.293 [2024-11-20 15:36:20.924299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.924444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.924478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.924733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.924765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.924968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.925001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.925276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.925308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.925620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.925658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.925855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.925886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.926053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.926085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.926341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.926373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.926647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.926680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.926935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.926975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.927265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.927298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.927613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.927646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.927855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.927888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.928201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.928235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.928440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.928472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.928775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.928806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.929072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.929106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.929369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.929401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.929647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.929681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.929823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.929855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.930057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.930090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.930287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.930320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.930543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.930574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.930848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.930880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.931102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.931136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.931363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.931396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.931695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.931726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.932000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.932033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.932236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.932268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.932448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.932479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.932745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.932777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.933030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.933064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.933291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.933323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.933517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.933548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.933747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.933779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.933983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.934015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.934227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.934260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.294 qpair failed and we were unable to recover it.
00:27:17.294 [2024-11-20 15:36:20.934389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.294 [2024-11-20 15:36:20.934419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.934646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.934677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.934878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.934909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.935145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.935177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.935382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.935413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.935560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.935591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.935774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.935806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.936074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.936113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.936393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.936425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.936733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.936764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.936972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.937005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.937234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.937266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.937470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.937502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.937763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.937795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.938091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.938125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.938259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.938290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.938566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.938596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.938833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.938865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.939078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.939111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.939391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.939423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.939643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.939674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.939991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.940024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.940282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.940313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.940570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.940602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.940905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.940936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.941100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.941131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.941327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.941358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.941647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.941680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.941967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.295 [2024-11-20 15:36:20.942000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.295 qpair failed and we were unable to recover it.
00:27:17.295 [2024-11-20 15:36:20.942189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.295 [2024-11-20 15:36:20.942220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.295 qpair failed and we were unable to recover it. 00:27:17.295 [2024-11-20 15:36:20.942472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.295 [2024-11-20 15:36:20.942504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.295 qpair failed and we were unable to recover it. 00:27:17.295 [2024-11-20 15:36:20.942701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.295 [2024-11-20 15:36:20.942733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.295 qpair failed and we were unable to recover it. 00:27:17.295 [2024-11-20 15:36:20.942913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.295 [2024-11-20 15:36:20.942944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.295 qpair failed and we were unable to recover it. 00:27:17.295 [2024-11-20 15:36:20.943158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.295 [2024-11-20 15:36:20.943190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.295 qpair failed and we were unable to recover it. 
00:27:17.295 [2024-11-20 15:36:20.943453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.295 [2024-11-20 15:36:20.943484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.295 qpair failed and we were unable to recover it. 00:27:17.295 [2024-11-20 15:36:20.943785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.295 [2024-11-20 15:36:20.943817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.295 qpair failed and we were unable to recover it. 00:27:17.295 [2024-11-20 15:36:20.944088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.295 [2024-11-20 15:36:20.944121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.295 qpair failed and we were unable to recover it. 00:27:17.295 [2024-11-20 15:36:20.944278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.295 [2024-11-20 15:36:20.944310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.295 qpair failed and we were unable to recover it. 00:27:17.295 [2024-11-20 15:36:20.944509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.295 [2024-11-20 15:36:20.944541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 
00:27:17.296 [2024-11-20 15:36:20.944757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.944788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.945062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.945095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.945299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.945330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.945526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.945557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.945864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.945896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 
00:27:17.296 [2024-11-20 15:36:20.946187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.946220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.946339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.946370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.946641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.946672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.946901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.946940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.947133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.947165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 
00:27:17.296 [2024-11-20 15:36:20.947467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.947498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.947767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.947799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.948053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.948086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.948296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.948328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.948481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.948512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 
00:27:17.296 [2024-11-20 15:36:20.948797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.948828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.949089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.949122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.949254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.949286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.949417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.949447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.949720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.949751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 
00:27:17.296 [2024-11-20 15:36:20.950031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.950065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.950280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.950311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.950594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.950625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.950767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.950798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.951074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.951108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 
00:27:17.296 [2024-11-20 15:36:20.951319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.951351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.951631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.951662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.951859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.951890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.952124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.952157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.952297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.952328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 
00:27:17.296 [2024-11-20 15:36:20.952543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.952574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.952846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.952878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.953174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.953207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.296 qpair failed and we were unable to recover it. 00:27:17.296 [2024-11-20 15:36:20.953433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.296 [2024-11-20 15:36:20.953464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.953663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.953693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 
00:27:17.297 [2024-11-20 15:36:20.953963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.954002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.954260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.954292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.954569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.954600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.954874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.954905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.955128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.955160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 
00:27:17.297 [2024-11-20 15:36:20.955376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.955407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.955657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.955688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.955907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.955938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.956203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.956235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.956446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.956478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 
00:27:17.297 [2024-11-20 15:36:20.956705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.956736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.957036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.957069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.957337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.957367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.957578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.957610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.957810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.957842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 
00:27:17.297 [2024-11-20 15:36:20.958113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.958146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.958353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.958384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.958687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.958719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.958860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.958891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.959109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.959141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 
00:27:17.297 [2024-11-20 15:36:20.959365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.959396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.959594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.959625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.959887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.959918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.960216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.960248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.960520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.960551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 
00:27:17.297 [2024-11-20 15:36:20.960752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.960784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.960902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.960933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.961151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.961184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.961485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.961516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.961783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.961814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 
00:27:17.297 [2024-11-20 15:36:20.962102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.962135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.962418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.962450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.962728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.962759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.963071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.963105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.963361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.963393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 
00:27:17.297 [2024-11-20 15:36:20.963598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.297 [2024-11-20 15:36:20.963629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.297 qpair failed and we were unable to recover it. 00:27:17.297 [2024-11-20 15:36:20.963822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.298 [2024-11-20 15:36:20.963854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.298 qpair failed and we were unable to recover it. 00:27:17.298 [2024-11-20 15:36:20.964134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.298 [2024-11-20 15:36:20.964168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.298 qpair failed and we were unable to recover it. 00:27:17.298 [2024-11-20 15:36:20.964446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.298 [2024-11-20 15:36:20.964477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.298 qpair failed and we were unable to recover it. 00:27:17.298 [2024-11-20 15:36:20.964787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.298 [2024-11-20 15:36:20.964819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.298 qpair failed and we were unable to recover it. 
00:27:17.298 [2024-11-20 15:36:20.965089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.298 [2024-11-20 15:36:20.965128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.298 qpair failed and we were unable to recover it. 00:27:17.298 [2024-11-20 15:36:20.965318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.298 [2024-11-20 15:36:20.965349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.298 qpair failed and we were unable to recover it. 00:27:17.298 [2024-11-20 15:36:20.965573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.298 [2024-11-20 15:36:20.965605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.298 qpair failed and we were unable to recover it. 00:27:17.298 [2024-11-20 15:36:20.965782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.298 [2024-11-20 15:36:20.965814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.298 qpair failed and we were unable to recover it. 00:27:17.298 [2024-11-20 15:36:20.966006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.298 [2024-11-20 15:36:20.966040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.298 qpair failed and we were unable to recover it. 
00:27:17.301 [2024-11-20 15:36:20.991717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.991749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.991994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.992027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.992254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.992287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.992485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.992517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.992712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.992744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 
00:27:17.301 [2024-11-20 15:36:20.992866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.992903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.993040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.993073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.993323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.993354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.993634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.993665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.993886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.993918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 
00:27:17.301 [2024-11-20 15:36:20.994146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.994179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.994315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.994345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.994521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.994551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.994668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.994699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.994971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.995006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 
00:27:17.301 [2024-11-20 15:36:20.995136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.995167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.995418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.995450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.995702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.995734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.995986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.996021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.996163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.996195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 
00:27:17.301 [2024-11-20 15:36:20.996410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.996441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.996617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.996647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.996850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.996880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.997059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.997091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.997274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.997305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 
00:27:17.301 [2024-11-20 15:36:20.997430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.997459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.997603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.997635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.997902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.997933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.998224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.301 [2024-11-20 15:36:20.998256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.301 qpair failed and we were unable to recover it. 00:27:17.301 [2024-11-20 15:36:20.998530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:20.998561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 
00:27:17.302 [2024-11-20 15:36:20.998758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:20.998789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:20.998932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:20.998974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:20.999168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:20.999200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:20.999470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:20.999500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:20.999756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:20.999787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 
00:27:17.302 [2024-11-20 15:36:21.000036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.000069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.000262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.000293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.000435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.000467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.000672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.000703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.000913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.000943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 
00:27:17.302 [2024-11-20 15:36:21.001226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.001258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.001446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.001478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.001621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.001651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.001849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.001880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.002047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.002080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 
00:27:17.302 [2024-11-20 15:36:21.002326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.002363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.002491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.002522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.002821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.002851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.003100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.003132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.003356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.003386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 
00:27:17.302 [2024-11-20 15:36:21.003586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.003617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.003748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.003779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.004047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.004081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.004221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.004251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.004443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.004473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 
00:27:17.302 [2024-11-20 15:36:21.004596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.004627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.004813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.004845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.004969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.005001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.005253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.005285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.005479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.005510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 
00:27:17.302 [2024-11-20 15:36:21.005707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.005738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.005941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.005982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.006189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.006222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.006543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.006573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.006766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.006797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 
00:27:17.302 [2024-11-20 15:36:21.006985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.007018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.007212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.302 [2024-11-20 15:36:21.007244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.302 qpair failed and we were unable to recover it. 00:27:17.302 [2024-11-20 15:36:21.007541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.007572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.007748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.007779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.008045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.008078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 
00:27:17.303 [2024-11-20 15:36:21.008363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.008395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.008517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.008548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.008750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.008781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.009052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.009085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.009208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.009239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 
00:27:17.303 [2024-11-20 15:36:21.009420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.009450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.009572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.009603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.009806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.009838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.010088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.010121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.010331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.010372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 
00:27:17.303 [2024-11-20 15:36:21.010644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.010675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.010848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.010878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.011024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.011055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.011232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.011263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.011452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.011483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 
00:27:17.303 [2024-11-20 15:36:21.011680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.011716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.011844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.011874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.012064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.012097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.012374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.012405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.012671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.012702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 
00:27:17.303 [2024-11-20 15:36:21.012882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.012912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.013040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.013072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.013336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.013367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.013489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.013518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.013791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.013822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 
00:27:17.303 [2024-11-20 15:36:21.013961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.013992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.014099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.014130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.014234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.014264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.014478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.014509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.014658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.014689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 
00:27:17.303 [2024-11-20 15:36:21.014880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.014911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.015170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.015203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.015404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.015435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.015622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.015653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 00:27:17.303 [2024-11-20 15:36:21.015773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.303 [2024-11-20 15:36:21.015803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.303 qpair failed and we were unable to recover it. 
00:27:17.303 [2024-11-20 15:36:21.016034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.016066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.016264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.016295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.016536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.016565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.016762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.016793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.017014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.017047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 
00:27:17.304 [2024-11-20 15:36:21.017263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.017294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.017505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.017535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.017803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.017834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.018114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.018146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.018270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.018301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 
00:27:17.304 [2024-11-20 15:36:21.018513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.018543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.018680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.018710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.018886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.018917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.019078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.019108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.019356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.019387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 
00:27:17.304 [2024-11-20 15:36:21.019559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.019589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.019781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.019811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.020075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.020107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.020320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.020349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.020506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.020536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 
00:27:17.304 [2024-11-20 15:36:21.020822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.020859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.021139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.021172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.021346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.021377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.021569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.021599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.021843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.021874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 
00:27:17.304 [2024-11-20 15:36:21.022072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.022104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.022366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.022397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.022584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.022615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.022722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.022751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.022862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.022894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 
00:27:17.304 [2024-11-20 15:36:21.023090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.023122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.023317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.023348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.023538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.304 [2024-11-20 15:36:21.023567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.304 qpair failed and we were unable to recover it. 00:27:17.304 [2024-11-20 15:36:21.023749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.023778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.023980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.024012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 
00:27:17.305 [2024-11-20 15:36:21.024275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.024306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.024518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.024549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.024722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.024753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.024943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.024997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.025151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.025184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 
00:27:17.305 [2024-11-20 15:36:21.025429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.025461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.025646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.025676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.025854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.025883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.026028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.026059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.026201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.026232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 
00:27:17.305 [2024-11-20 15:36:21.026424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.026457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.026598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.026628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.026754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.026787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.026898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.026928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.027137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.027168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 
00:27:17.305 [2024-11-20 15:36:21.027384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.027414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.027616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.027647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.027786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.027816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.028079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.028111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.028358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.028388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 
00:27:17.305 [2024-11-20 15:36:21.028561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.028592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.028862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.028892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.029087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.029118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.029304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.029334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.029541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.029572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 
00:27:17.305 [2024-11-20 15:36:21.029696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.029731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.029925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.029965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.030183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.030214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.030391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.030421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.305 [2024-11-20 15:36:21.030624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.030654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 
00:27:17.305 [2024-11-20 15:36:21.030836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.305 [2024-11-20 15:36:21.030866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.305 qpair failed and we were unable to recover it. 00:27:17.306 [2024-11-20 15:36:21.031056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.306 [2024-11-20 15:36:21.031087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.306 qpair failed and we were unable to recover it. 00:27:17.306 [2024-11-20 15:36:21.031357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.306 [2024-11-20 15:36:21.031389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.306 qpair failed and we were unable to recover it. 00:27:17.306 [2024-11-20 15:36:21.031511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.306 [2024-11-20 15:36:21.031541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.306 qpair failed and we were unable to recover it. 00:27:17.306 [2024-11-20 15:36:21.031756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.306 [2024-11-20 15:36:21.031787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.306 qpair failed and we were unable to recover it. 
00:27:17.306 [2024-11-20 15:36:21.031975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.306 [2024-11-20 15:36:21.032007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.306 qpair failed and we were unable to recover it. 00:27:17.306 [2024-11-20 15:36:21.032180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.306 [2024-11-20 15:36:21.032211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.306 qpair failed and we were unable to recover it. 00:27:17.306 [2024-11-20 15:36:21.032331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.306 [2024-11-20 15:36:21.032360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.306 qpair failed and we were unable to recover it. 00:27:17.306 [2024-11-20 15:36:21.032536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.306 [2024-11-20 15:36:21.032566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.306 qpair failed and we were unable to recover it. 00:27:17.306 [2024-11-20 15:36:21.032778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.306 [2024-11-20 15:36:21.032811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.306 qpair failed and we were unable to recover it. 
00:27:17.306 [2024-11-20 15:36:21.032996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.033028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.033216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.033247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.033435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.033465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.033708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.033739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.033921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.033962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.034086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.034117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.034292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.034323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.034506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.034537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.034803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.034834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.034968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.035000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.035135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.035166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.035357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.035388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.035567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.035643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.035854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.035890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.036097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.036132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.036250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.036283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.036490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.036521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.036643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.036676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.036797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.036828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.036958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.036990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.037264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.037295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.037447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.037478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.037743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.037774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.306 qpair failed and we were unable to recover it.
00:27:17.306 [2024-11-20 15:36:21.037972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.306 [2024-11-20 15:36:21.038006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.038199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.038231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.038359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.038400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.038595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.038627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.038918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.038959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.039071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.039103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.039347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.039378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.039625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.039655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.039854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.039886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.040073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.040107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.040235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.040266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.040458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.040491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.040734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.040767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.040969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.041003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.041177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.041208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.041346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.041378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.041566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.041598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.041841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.041873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.042063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.042096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.042219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.042254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.042431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.042463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.042709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.042742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.042934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.042978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.043158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.043190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.043330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.043362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.043536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.043566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.043756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.043787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.043971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.044006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.044214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.044246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.044420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.044493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.044704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.044739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.044850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.044880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.045067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.045100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.045290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.045321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.045499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.045530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.045653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.045687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.045810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.045840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.046031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.307 [2024-11-20 15:36:21.046065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.307 qpair failed and we were unable to recover it.
00:27:17.307 [2024-11-20 15:36:21.046187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.046220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.046353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.046385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.046508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.046540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.046653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.046685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.046811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.046852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.047038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.047072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.047207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.047240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.047412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.047442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.047620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.047651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.047846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.047878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.047992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.048027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.048280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.048311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.048556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.048588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.048845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.048878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.049086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.049121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.049312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.049344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.049535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.049566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.049753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.049785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.049919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.049962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.050074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.050105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.050311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.050343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.050589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.050621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.050883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.050915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.051167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.051201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.051340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.051372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.051612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.051643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.051834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.051866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.052054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.052088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.052311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.052342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.052486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.052517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.052770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.052802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.053002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.053038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.053296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.053328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.308 [2024-11-20 15:36:21.053447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.308 [2024-11-20 15:36:21.053479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.308 qpair failed and we were unable to recover it.
00:27:17.309 [2024-11-20 15:36:21.053653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.053687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.053865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.053896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.054147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.054179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.054370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.054402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.054600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.054633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 
00:27:17.309 [2024-11-20 15:36:21.054769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.054801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.054986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.055019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.055261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.055292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.055538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.055570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.055788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.055819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 
00:27:17.309 [2024-11-20 15:36:21.056049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.056089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.056357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.056388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.056600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.056631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.056767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.056798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.056985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.057019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 
00:27:17.309 [2024-11-20 15:36:21.057142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.057174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.057416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.057449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.057693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.057725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.057850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.057881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.058131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.058165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 
00:27:17.309 [2024-11-20 15:36:21.058424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.058456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.058723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.058755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.058995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.059028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.059218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.059250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.059521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.059554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 
00:27:17.309 [2024-11-20 15:36:21.059733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.059765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.059960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.059994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.060186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.060217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.060340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.060374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.060544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.060576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 
00:27:17.309 [2024-11-20 15:36:21.060778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.060809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.061055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.061089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.061260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.061292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.061463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.061495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.061720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.061752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 
00:27:17.309 [2024-11-20 15:36:21.062012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.062046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.062164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.062196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.309 [2024-11-20 15:36:21.062423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.309 [2024-11-20 15:36:21.062495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.309 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.062732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.062803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.063048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.063091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 
00:27:17.310 [2024-11-20 15:36:21.063387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.063421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.063613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.063646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.063768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.063801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.063987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.064023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.064289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.064325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 
00:27:17.310 [2024-11-20 15:36:21.064501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.064539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.064738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.064771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.064944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.064990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.065246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.065278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.065452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.065485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 
00:27:17.310 [2024-11-20 15:36:21.065671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.065703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.065978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.066012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.066191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.066223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.066408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.066441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.066614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.066646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 
00:27:17.310 [2024-11-20 15:36:21.066838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.066870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.067112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.067147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.067318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.067350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.067620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.067654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.067921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.067970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 
00:27:17.310 [2024-11-20 15:36:21.068143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.068178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.068432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.068464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.068638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.068670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.068914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.068960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.069205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.069250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 
00:27:17.310 [2024-11-20 15:36:21.069436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.069467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.069596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.310 [2024-11-20 15:36:21.069628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.310 qpair failed and we were unable to recover it. 00:27:17.310 [2024-11-20 15:36:21.069884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.311 [2024-11-20 15:36:21.069917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.311 qpair failed and we were unable to recover it. 00:27:17.311 [2024-11-20 15:36:21.070141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.311 [2024-11-20 15:36:21.070175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.311 qpair failed and we were unable to recover it. 00:27:17.311 [2024-11-20 15:36:21.070409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.311 [2024-11-20 15:36:21.070442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.311 qpair failed and we were unable to recover it. 
00:27:17.311 [2024-11-20 15:36:21.070567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.311 [2024-11-20 15:36:21.070600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.311 qpair failed and we were unable to recover it. 00:27:17.311 [2024-11-20 15:36:21.070727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.311 [2024-11-20 15:36:21.070759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.311 qpair failed and we were unable to recover it. 00:27:17.311 [2024-11-20 15:36:21.070970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.311 [2024-11-20 15:36:21.071004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.311 qpair failed and we were unable to recover it. 00:27:17.311 [2024-11-20 15:36:21.071183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.311 [2024-11-20 15:36:21.071217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.311 qpair failed and we were unable to recover it. 00:27:17.311 [2024-11-20 15:36:21.071383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.311 [2024-11-20 15:36:21.071416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.311 qpair failed and we were unable to recover it. 
00:27:17.311 [2024-11-20 15:36:21.071598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.311 [2024-11-20 15:36:21.071631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.311 qpair failed and we were unable to recover it. 00:27:17.311 [2024-11-20 15:36:21.071759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.311 [2024-11-20 15:36:21.071790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.311 qpair failed and we were unable to recover it. 00:27:17.311 [2024-11-20 15:36:21.071909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.311 [2024-11-20 15:36:21.071941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.311 qpair failed and we were unable to recover it. 00:27:17.311 [2024-11-20 15:36:21.072226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.311 [2024-11-20 15:36:21.072259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.311 qpair failed and we were unable to recover it. 00:27:17.311 [2024-11-20 15:36:21.072528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.311 [2024-11-20 15:36:21.072559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.311 qpair failed and we were unable to recover it. 
00:27:17.311 [2024-11-20 15:36:21.072730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.311 [2024-11-20 15:36:21.072763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.311 qpair failed and we were unable to recover it. 00:27:17.311 [2024-11-20 15:36:21.072930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.311 [2024-11-20 15:36:21.072980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.311 qpair failed and we were unable to recover it. 00:27:17.311 [2024-11-20 15:36:21.073258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.311 [2024-11-20 15:36:21.073291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.311 qpair failed and we were unable to recover it. 00:27:17.311 [2024-11-20 15:36:21.073550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.311 [2024-11-20 15:36:21.073581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.311 qpair failed and we were unable to recover it. 00:27:17.311 [2024-11-20 15:36:21.073835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.311 [2024-11-20 15:36:21.073866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.311 qpair failed and we were unable to recover it. 
00:27:17.311 [2024-11-20 15:36:21.074078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.311 [2024-11-20 15:36:21.074112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.311 qpair failed and we were unable to recover it.
00:27:17.311 [2024-11-20 15:36:21.074306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.311 [2024-11-20 15:36:21.074338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.311 qpair failed and we were unable to recover it.
00:27:17.311 [2024-11-20 15:36:21.074524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.311 [2024-11-20 15:36:21.074557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.311 qpair failed and we were unable to recover it.
00:27:17.311 [2024-11-20 15:36:21.074810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.311 [2024-11-20 15:36:21.074842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.311 qpair failed and we were unable to recover it.
00:27:17.311 [2024-11-20 15:36:21.075049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.311 [2024-11-20 15:36:21.075083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.311 qpair failed and we were unable to recover it.
00:27:17.311 [2024-11-20 15:36:21.075277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.311 [2024-11-20 15:36:21.075310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.311 qpair failed and we were unable to recover it.
00:27:17.311 [2024-11-20 15:36:21.075450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.311 [2024-11-20 15:36:21.075489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.311 qpair failed and we were unable to recover it.
00:27:17.311 [2024-11-20 15:36:21.075607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.311 [2024-11-20 15:36:21.075638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.311 qpair failed and we were unable to recover it.
00:27:17.311 [2024-11-20 15:36:21.075898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.311 [2024-11-20 15:36:21.075932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.311 qpair failed and we were unable to recover it.
00:27:17.311 [2024-11-20 15:36:21.076163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.311 [2024-11-20 15:36:21.076196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.311 qpair failed and we were unable to recover it.
00:27:17.311 [2024-11-20 15:36:21.076434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.311 [2024-11-20 15:36:21.076467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.311 qpair failed and we were unable to recover it.
00:27:17.311 [2024-11-20 15:36:21.076659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.311 [2024-11-20 15:36:21.076690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.311 qpair failed and we were unable to recover it.
00:27:17.311 [2024-11-20 15:36:21.076956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.311 [2024-11-20 15:36:21.076990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.311 qpair failed and we were unable to recover it.
00:27:17.311 [2024-11-20 15:36:21.077103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.311 [2024-11-20 15:36:21.077135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.311 qpair failed and we were unable to recover it.
00:27:17.311 [2024-11-20 15:36:21.077304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.311 [2024-11-20 15:36:21.077336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.311 qpair failed and we were unable to recover it.
00:27:17.311 [2024-11-20 15:36:21.077527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.311 [2024-11-20 15:36:21.077560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.311 qpair failed and we were unable to recover it.
00:27:17.311 [2024-11-20 15:36:21.077799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.311 [2024-11-20 15:36:21.077832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.311 qpair failed and we were unable to recover it.
00:27:17.311 [2024-11-20 15:36:21.078072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.311 [2024-11-20 15:36:21.078112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.311 qpair failed and we were unable to recover it.
00:27:17.311 [2024-11-20 15:36:21.078245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.311 [2024-11-20 15:36:21.078277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.311 qpair failed and we were unable to recover it.
00:27:17.311 [2024-11-20 15:36:21.078462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.311 [2024-11-20 15:36:21.078494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.311 qpair failed and we were unable to recover it.
00:27:17.311 [2024-11-20 15:36:21.078674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.078708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.078937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.078981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.079154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.079186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.079443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.079477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.079654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.079688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.079869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.079901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.080205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.080237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.080358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.080390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.080642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.080673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.080846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.080877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.081130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.081163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.081355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.081387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.081591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.081623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.081887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.081919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.082086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.082125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.082243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.082278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.082451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.082481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.082654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.082685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.082869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.082900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.083079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.083112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.083233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.083265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.083464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.083495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.083738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.083770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.084044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.084079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.084213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.084246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.084355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.084395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.084592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.084625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.084821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.084855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.085035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.085069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.085254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.085288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.085391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.085423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.085632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.085663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.085865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.085898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.086117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.086151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.086323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.086355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.086537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.086570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.086706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.086738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.086925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.312 [2024-11-20 15:36:21.086965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.312 qpair failed and we were unable to recover it.
00:27:17.312 [2024-11-20 15:36:21.087173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.087211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.087382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.087412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.087585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.087617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.087753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.087784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.087901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.087934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.088066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.088098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.088277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.088308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.088545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.088580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.088766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.088798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.089037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.089070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.089311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.089343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.089600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.089632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.089730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.089761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.089942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.089981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.090085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.090116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.090316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.090347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.090582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.090619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.090788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.090819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.091007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.091039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.091153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.091184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.091320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.091350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.091542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.091574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.091712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.091744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.091923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.091964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.092163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.092196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.092345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.092376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.092545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.092578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.092704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.092735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.092943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.092987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.093169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.093201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.093381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.093412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.093601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.093634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.093805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.093836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.094085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.094118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.094242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.094274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.094407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.094439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.094576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.094607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.094848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.313 [2024-11-20 15:36:21.094881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.313 qpair failed and we were unable to recover it.
00:27:17.313 [2024-11-20 15:36:21.095148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.314 [2024-11-20 15:36:21.095181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.314 qpair failed and we were unable to recover it.
00:27:17.314 [2024-11-20 15:36:21.095314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.314 [2024-11-20 15:36:21.095345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.314 qpair failed and we were unable to recover it.
00:27:17.314 [2024-11-20 15:36:21.095523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.314 [2024-11-20 15:36:21.095554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.314 qpair failed and we were unable to recover it.
00:27:17.314 [2024-11-20 15:36:21.095770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.314 [2024-11-20 15:36:21.095800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.314 qpair failed and we were unable to recover it.
00:27:17.314 [2024-11-20 15:36:21.096002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.314 [2024-11-20 15:36:21.096035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.314 qpair failed and we were unable to recover it.
00:27:17.314 [2024-11-20 15:36:21.096221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.314 [2024-11-20 15:36:21.096261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.314 qpair failed and we were unable to recover it.
00:27:17.314 [2024-11-20 15:36:21.096452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.314 [2024-11-20 15:36:21.096484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.314 qpair failed and we were unable to recover it.
00:27:17.314 [2024-11-20 15:36:21.096603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.314 [2024-11-20 15:36:21.096636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.314 qpair failed and we were unable to recover it.
00:27:17.314 [2024-11-20 15:36:21.096818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.314 [2024-11-20 15:36:21.096850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.314 qpair failed and we were unable to recover it.
00:27:17.314 [2024-11-20 15:36:21.097019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.314 [2024-11-20 15:36:21.097052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.314 qpair failed and we were unable to recover it.
00:27:17.314 [2024-11-20 15:36:21.097239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.314 [2024-11-20 15:36:21.097272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.314 qpair failed and we were unable to recover it.
00:27:17.314 [2024-11-20 15:36:21.097457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.314 [2024-11-20 15:36:21.097489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.314 qpair failed and we were unable to recover it.
00:27:17.314 [2024-11-20 15:36:21.097734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.314 [2024-11-20 15:36:21.097767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.314 qpair failed and we were unable to recover it.
00:27:17.314 [2024-11-20 15:36:21.097974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.314 [2024-11-20 15:36:21.098008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.314 qpair failed and we were unable to recover it.
00:27:17.314 [2024-11-20 15:36:21.098190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.314 [2024-11-20 15:36:21.098222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.314 qpair failed and we were unable to recover it.
00:27:17.314 [2024-11-20 15:36:21.098339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.314 [2024-11-20 15:36:21.098370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.314 qpair failed and we were unable to recover it.
00:27:17.314 [2024-11-20 15:36:21.098606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.314 [2024-11-20 15:36:21.098637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.314 qpair failed and we were unable to recover it.
00:27:17.314 [2024-11-20 15:36:21.098746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.314 [2024-11-20 15:36:21.098777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.314 qpair failed and we were unable to recover it.
00:27:17.314 [2024-11-20 15:36:21.099010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.314 [2024-11-20 15:36:21.099045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.314 qpair failed and we were unable to recover it.
00:27:17.314 [2024-11-20 15:36:21.099238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.314 [2024-11-20 15:36:21.099271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.314 qpair failed and we were unable to recover it. 00:27:17.314 [2024-11-20 15:36:21.099529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.314 [2024-11-20 15:36:21.099559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.314 qpair failed and we were unable to recover it. 00:27:17.314 [2024-11-20 15:36:21.099746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.314 [2024-11-20 15:36:21.099778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.314 qpair failed and we were unable to recover it. 00:27:17.314 [2024-11-20 15:36:21.099895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.314 [2024-11-20 15:36:21.099927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.314 qpair failed and we were unable to recover it. 00:27:17.314 [2024-11-20 15:36:21.100059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.314 [2024-11-20 15:36:21.100092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.314 qpair failed and we were unable to recover it. 
00:27:17.314 [2024-11-20 15:36:21.100217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.314 [2024-11-20 15:36:21.100249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.314 qpair failed and we were unable to recover it. 00:27:17.314 [2024-11-20 15:36:21.100419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.314 [2024-11-20 15:36:21.100450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.314 qpair failed and we were unable to recover it. 00:27:17.314 [2024-11-20 15:36:21.100569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.314 [2024-11-20 15:36:21.100600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.314 qpair failed and we were unable to recover it. 00:27:17.314 [2024-11-20 15:36:21.100797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.314 [2024-11-20 15:36:21.100829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.314 qpair failed and we were unable to recover it. 00:27:17.314 [2024-11-20 15:36:21.101115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.314 [2024-11-20 15:36:21.101147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.314 qpair failed and we were unable to recover it. 
00:27:17.314 [2024-11-20 15:36:21.101333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.314 [2024-11-20 15:36:21.101366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.314 qpair failed and we were unable to recover it. 00:27:17.314 [2024-11-20 15:36:21.101545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.314 [2024-11-20 15:36:21.101576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.314 qpair failed and we were unable to recover it. 00:27:17.314 [2024-11-20 15:36:21.101701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.314 [2024-11-20 15:36:21.101732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.314 qpair failed and we were unable to recover it. 00:27:17.314 [2024-11-20 15:36:21.101858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.314 [2024-11-20 15:36:21.101896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.314 qpair failed and we were unable to recover it. 00:27:17.314 [2024-11-20 15:36:21.102204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.314 [2024-11-20 15:36:21.102242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.314 qpair failed and we were unable to recover it. 
00:27:17.314 [2024-11-20 15:36:21.102378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.314 [2024-11-20 15:36:21.102411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.314 qpair failed and we were unable to recover it. 00:27:17.314 [2024-11-20 15:36:21.102593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.314 [2024-11-20 15:36:21.102625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.314 qpair failed and we were unable to recover it. 00:27:17.314 [2024-11-20 15:36:21.102807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.314 [2024-11-20 15:36:21.102839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.314 qpair failed and we were unable to recover it. 00:27:17.314 [2024-11-20 15:36:21.102969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.314 [2024-11-20 15:36:21.103003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.314 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.103212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.103245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 
00:27:17.315 [2024-11-20 15:36:21.103416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.103448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.103550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.103582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.103699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.103731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.103905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.103937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.104194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.104225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 
00:27:17.315 [2024-11-20 15:36:21.104416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.104447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.104654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.104686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.104876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.104908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.105059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.105093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.105308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.105340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 
00:27:17.315 [2024-11-20 15:36:21.105511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.105544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.105725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.105757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.105857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.105890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.106011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.106043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.106224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.106256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 
00:27:17.315 [2024-11-20 15:36:21.106377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.106409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.106646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.106678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.106851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.106883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.107002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.107035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.107226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.107257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 
00:27:17.315 [2024-11-20 15:36:21.107426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.107457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.107645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.107676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.107882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.107915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.108182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.108214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.108396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.108428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 
00:27:17.315 [2024-11-20 15:36:21.108624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.108656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.108863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.108894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.109016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.109049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.109336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.109367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.109537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.109569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 
00:27:17.315 [2024-11-20 15:36:21.109750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.109781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.110026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.110059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.315 qpair failed and we were unable to recover it. 00:27:17.315 [2024-11-20 15:36:21.110321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.315 [2024-11-20 15:36:21.110353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 00:27:17.316 [2024-11-20 15:36:21.110525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.110556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 00:27:17.316 [2024-11-20 15:36:21.110697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.110728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 
00:27:17.316 [2024-11-20 15:36:21.110872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.110904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 00:27:17.316 [2024-11-20 15:36:21.111172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.111205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 00:27:17.316 [2024-11-20 15:36:21.111443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.111475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 00:27:17.316 [2024-11-20 15:36:21.111605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.111636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 00:27:17.316 [2024-11-20 15:36:21.111870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.111902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 
00:27:17.316 [2024-11-20 15:36:21.112182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.112216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 00:27:17.316 [2024-11-20 15:36:21.112345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.112377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 00:27:17.316 [2024-11-20 15:36:21.112557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.112588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 00:27:17.316 [2024-11-20 15:36:21.112696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.112727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 00:27:17.316 [2024-11-20 15:36:21.112852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.112884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 
00:27:17.316 [2024-11-20 15:36:21.113059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.113092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 00:27:17.316 [2024-11-20 15:36:21.113216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.113247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 00:27:17.316 [2024-11-20 15:36:21.113433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.113463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 00:27:17.316 [2024-11-20 15:36:21.113662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.113693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 00:27:17.316 [2024-11-20 15:36:21.113834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.113864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 
00:27:17.316 [2024-11-20 15:36:21.114103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.114135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 00:27:17.316 [2024-11-20 15:36:21.114318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.114349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 00:27:17.316 [2024-11-20 15:36:21.114528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.114559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 00:27:17.316 [2024-11-20 15:36:21.114672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.114702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 00:27:17.316 [2024-11-20 15:36:21.114994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.316 [2024-11-20 15:36:21.115029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.316 qpair failed and we were unable to recover it. 
00:27:17.316 [2024-11-20 15:36:21.115178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.316 [2024-11-20 15:36:21.115210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.316 qpair failed and we were unable to recover it.
00:27:17.316-00:27:17.319 [... the same three-line failure sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim for every subsequent connect attempt from 15:36:21.115388 through 15:36:21.139640 ...]
00:27:17.319 [2024-11-20 15:36:21.139783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.319 [2024-11-20 15:36:21.139814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.319 qpair failed and we were unable to recover it. 00:27:17.319 [2024-11-20 15:36:21.140006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.319 [2024-11-20 15:36:21.140039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.319 qpair failed and we were unable to recover it. 00:27:17.319 [2024-11-20 15:36:21.140277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.319 [2024-11-20 15:36:21.140307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.319 qpair failed and we were unable to recover it. 00:27:17.319 [2024-11-20 15:36:21.140414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.319 [2024-11-20 15:36:21.140446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.319 qpair failed and we were unable to recover it. 00:27:17.319 [2024-11-20 15:36:21.140555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.319 [2024-11-20 15:36:21.140588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.319 qpair failed and we were unable to recover it. 
00:27:17.319 [2024-11-20 15:36:21.140765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.319 [2024-11-20 15:36:21.140798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.319 qpair failed and we were unable to recover it. 00:27:17.319 [2024-11-20 15:36:21.141037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.319 [2024-11-20 15:36:21.141069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.319 qpair failed and we were unable to recover it. 00:27:17.319 [2024-11-20 15:36:21.141203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.319 [2024-11-20 15:36:21.141237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.319 qpair failed and we were unable to recover it. 00:27:17.319 [2024-11-20 15:36:21.141407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.319 [2024-11-20 15:36:21.141439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.319 qpair failed and we were unable to recover it. 00:27:17.319 [2024-11-20 15:36:21.141675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.319 [2024-11-20 15:36:21.141706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.319 qpair failed and we were unable to recover it. 
00:27:17.319 [2024-11-20 15:36:21.141838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.319 [2024-11-20 15:36:21.141871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.319 qpair failed and we were unable to recover it. 00:27:17.319 [2024-11-20 15:36:21.142058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.142090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.142342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.142373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.142556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.142594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.142707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.142738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 
00:27:17.320 [2024-11-20 15:36:21.143003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.143036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.143298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.143329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.143442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.143479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.143653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.143684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.143920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.143958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 
00:27:17.320 [2024-11-20 15:36:21.144244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.144276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.144455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.144486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.144664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.144695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.144832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.144863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.145052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.145084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 
00:27:17.320 [2024-11-20 15:36:21.145268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.145299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.145484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.145515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.145803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.145835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.146064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.146096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.146287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.146319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 
00:27:17.320 [2024-11-20 15:36:21.146538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.146569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.146745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.146776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.146972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.147005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.147125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.147156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.147326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.147357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 
00:27:17.320 [2024-11-20 15:36:21.147550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.147581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.147847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.147877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.148050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.148083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.148332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.148363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.148486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.148517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 
00:27:17.320 [2024-11-20 15:36:21.148698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.148735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.148840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.148871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.149086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.149118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.149304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.149335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.149466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.149497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 
00:27:17.320 [2024-11-20 15:36:21.149610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.149640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.149765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.149797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.149974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.150006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.150184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.320 [2024-11-20 15:36:21.150216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.320 qpair failed and we were unable to recover it. 00:27:17.320 [2024-11-20 15:36:21.150404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.150435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 
00:27:17.321 [2024-11-20 15:36:21.150669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.150701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 00:27:17.321 [2024-11-20 15:36:21.150872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.150903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 00:27:17.321 [2024-11-20 15:36:21.151106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.151138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 00:27:17.321 [2024-11-20 15:36:21.151317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.151347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 00:27:17.321 [2024-11-20 15:36:21.151530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.151561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 
00:27:17.321 [2024-11-20 15:36:21.151767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.151799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 00:27:17.321 [2024-11-20 15:36:21.151928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.151971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 00:27:17.321 [2024-11-20 15:36:21.152091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.152122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 00:27:17.321 [2024-11-20 15:36:21.152385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.152416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 00:27:17.321 [2024-11-20 15:36:21.152531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.152561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 
00:27:17.321 [2024-11-20 15:36:21.152673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.152705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 00:27:17.321 [2024-11-20 15:36:21.152965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.152999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 00:27:17.321 [2024-11-20 15:36:21.153114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.153145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 00:27:17.321 [2024-11-20 15:36:21.153250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.153281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 00:27:17.321 [2024-11-20 15:36:21.153448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.153480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 
00:27:17.321 [2024-11-20 15:36:21.153648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.153680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 00:27:17.321 [2024-11-20 15:36:21.153792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.153824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 00:27:17.321 [2024-11-20 15:36:21.154059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.154092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 00:27:17.321 [2024-11-20 15:36:21.154234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.154265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 00:27:17.321 [2024-11-20 15:36:21.154368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.154400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 
00:27:17.321 [2024-11-20 15:36:21.154567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.154599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 00:27:17.321 [2024-11-20 15:36:21.154721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.154752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 00:27:17.321 [2024-11-20 15:36:21.154855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.154886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 00:27:17.321 [2024-11-20 15:36:21.155091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.155123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 00:27:17.321 [2024-11-20 15:36:21.155310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.321 [2024-11-20 15:36:21.155342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.321 qpair failed and we were unable to recover it. 
00:27:17.321 [2024-11-20 15:36:21.155467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.321 [2024-11-20 15:36:21.155498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.321 qpair failed and we were unable to recover it.
[log condensed: the identical triplet — connect() failed, errno = 111 / sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats 74 more times between 15:36:21.155603 and 15:36:21.170886]
00:27:17.323 [2024-11-20 15:36:21.171105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184faf0 is same with the state(6) to be set
00:27:17.323 [2024-11-20 15:36:21.171469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.323 [2024-11-20 15:36:21.171540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.323 qpair failed and we were unable to recover it.
[log condensed: the identical triplet — connect() failed, errno = 111 / sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats 39 more times between 15:36:21.171669 and 15:36:21.180159]
00:27:17.616 [2024-11-20 15:36:21.180373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.180404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 00:27:17.616 [2024-11-20 15:36:21.180597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.180628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 00:27:17.616 [2024-11-20 15:36:21.180753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.180784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 00:27:17.616 [2024-11-20 15:36:21.180994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.181029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 00:27:17.616 [2024-11-20 15:36:21.181134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.181166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 
00:27:17.616 [2024-11-20 15:36:21.181403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.181434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 00:27:17.616 [2024-11-20 15:36:21.181699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.181730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 00:27:17.616 [2024-11-20 15:36:21.181909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.181939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 00:27:17.616 [2024-11-20 15:36:21.182204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.182235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 00:27:17.616 [2024-11-20 15:36:21.182415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.182445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 
00:27:17.616 [2024-11-20 15:36:21.182566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.182597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 00:27:17.616 [2024-11-20 15:36:21.182785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.182816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 00:27:17.616 [2024-11-20 15:36:21.183061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.183093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 00:27:17.616 [2024-11-20 15:36:21.183264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.183295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 00:27:17.616 [2024-11-20 15:36:21.183505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.183536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 
00:27:17.616 [2024-11-20 15:36:21.183704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.183735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 00:27:17.616 [2024-11-20 15:36:21.183863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.183894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 00:27:17.616 [2024-11-20 15:36:21.184007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.184040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 00:27:17.616 [2024-11-20 15:36:21.184248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.184278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 00:27:17.616 [2024-11-20 15:36:21.184451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.184482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 
00:27:17.616 [2024-11-20 15:36:21.184668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.184700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 00:27:17.616 [2024-11-20 15:36:21.184810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.616 [2024-11-20 15:36:21.184841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.616 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.185102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.185135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.185399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.185431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.185664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.185695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 
00:27:17.617 [2024-11-20 15:36:21.185876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.185912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.186107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.186139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.186418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.186449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.186690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.186721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.186967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.186998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 
00:27:17.617 [2024-11-20 15:36:21.187179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.187211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.187350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.187381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.187501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.187533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.187770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.187801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.188039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.188071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 
00:27:17.617 [2024-11-20 15:36:21.188197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.188227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.188433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.188464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.188655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.188687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.188900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.188932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.189138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.189170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 
00:27:17.617 [2024-11-20 15:36:21.189411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.189442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.189558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.189591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.189853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.189884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.190137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.190170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.190407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.190439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 
00:27:17.617 [2024-11-20 15:36:21.190676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.190707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.190877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.190909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.191107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.191141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.191330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.191362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.191543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.191574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 
00:27:17.617 [2024-11-20 15:36:21.191831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.191861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.192068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.192099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.192292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.192323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.192591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.192621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.192885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.192916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 
00:27:17.617 [2024-11-20 15:36:21.193096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.193127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.193256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.193286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.193552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.193584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.193684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.193715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.193960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.193992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 
00:27:17.617 [2024-11-20 15:36:21.194230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.194261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.194440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.194471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.194706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.194736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.194930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.194972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.195142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.195173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 
00:27:17.617 [2024-11-20 15:36:21.195430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.195466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.195603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.195634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.195765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.195795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.195927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.195967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.196235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.196267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 
00:27:17.617 [2024-11-20 15:36:21.196450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.196480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.196721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.196752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.196856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.196885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.196999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.197033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 00:27:17.617 [2024-11-20 15:36:21.197141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.617 [2024-11-20 15:36:21.197172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.617 qpair failed and we were unable to recover it. 
00:27:17.617 [2024-11-20 15:36:21.197361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.197393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.197580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.197610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.197728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.197757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.198017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.198049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.198238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.198269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 
00:27:17.618 [2024-11-20 15:36:21.198464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.198495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.198667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.198698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.198803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.198832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.198946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.198998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.199180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.199211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 
00:27:17.618 [2024-11-20 15:36:21.199449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.199480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.199659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.199689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.199872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.199902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.200029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.200059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.200228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.200259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 
00:27:17.618 [2024-11-20 15:36:21.200363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.200393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.200655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.200685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.200806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.200837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.200994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.201027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.201244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.201275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 
00:27:17.618 [2024-11-20 15:36:21.201447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.201477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.201593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.201624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.201818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.201849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.202044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.202076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.202276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.202307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 
00:27:17.618 [2024-11-20 15:36:21.202444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.202473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.202598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.202629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.202763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.202791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.202896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.202926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.203191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.203224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 
00:27:17.618 [2024-11-20 15:36:21.203428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.203465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.203731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.203763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.204007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.204039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.204276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.204308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.204431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.204462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 
00:27:17.618 [2024-11-20 15:36:21.204641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.204672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.204854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.204884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.205093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.205125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.205385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.205416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.205677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.205708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 
00:27:17.618 [2024-11-20 15:36:21.205915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.205956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.206166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.206197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.206408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.206439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.206639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.206670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.206854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.206885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 
00:27:17.618 [2024-11-20 15:36:21.207080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.207113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.207355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.207385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.207521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.207551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.207726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.207757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.207995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.208028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 
00:27:17.618 [2024-11-20 15:36:21.208145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.208177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.208454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.208484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.208655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.208686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.618 qpair failed and we were unable to recover it. 00:27:17.618 [2024-11-20 15:36:21.208813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.618 [2024-11-20 15:36:21.208844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.209053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.209084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 
00:27:17.619 [2024-11-20 15:36:21.209253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.209284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.209458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.209489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.209664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.209694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.209825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.209855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.210050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.210081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 
00:27:17.619 [2024-11-20 15:36:21.210271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.210302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.210469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.210500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.210613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.210642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.210764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.210794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.210904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.210936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 
00:27:17.619 [2024-11-20 15:36:21.211067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.211098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.211278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.211307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.211415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.211446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.211562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.211592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.211696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.211726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 
00:27:17.619 [2024-11-20 15:36:21.211839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.211874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.212046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.212077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.212260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.212290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.212547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.212577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.212777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.212807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 
00:27:17.619 [2024-11-20 15:36:21.213075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.213107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.213369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.213401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.213570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.213599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.213785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.213815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.214017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.214048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 
00:27:17.619 [2024-11-20 15:36:21.214235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.214266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.214386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.214415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.214527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.214558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.214729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.214758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.215024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.215057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 
00:27:17.619 [2024-11-20 15:36:21.215171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.215201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.215441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.215471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.215601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.215632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.215732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.215761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.215891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.215920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 
00:27:17.619 [2024-11-20 15:36:21.216150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.216221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.216367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.216401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.216593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.216625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.216867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.216900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.217117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.217150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 
00:27:17.619 [2024-11-20 15:36:21.217387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.217418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.217615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.217646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.217895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.217926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.218067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.218097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 00:27:17.619 [2024-11-20 15:36:21.218283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.619 [2024-11-20 15:36:21.218313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.619 qpair failed and we were unable to recover it. 
00:27:17.622 [2024-11-20 15:36:21.242370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.242401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.242529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.242560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.242748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.242778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.243042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.243073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.243313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.243345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 
00:27:17.622 [2024-11-20 15:36:21.243587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.243617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.243811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.243842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.244019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.244051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.244260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.244290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.244499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.244529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 
00:27:17.622 [2024-11-20 15:36:21.244713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.244742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.245000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.245032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.245170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.245201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.245436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.245467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.245570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.245600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 
00:27:17.622 [2024-11-20 15:36:21.245721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.245752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.245975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.246007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.246194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.246225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.246333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.246362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.246532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.246562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 
00:27:17.622 [2024-11-20 15:36:21.246742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.246778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.246988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.247022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.247144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.247176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.247366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.247396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.247632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.247661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 
00:27:17.622 [2024-11-20 15:36:21.247917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.247958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.248139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.248168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.248343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.248372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.248574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.248606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.248840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.248871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 
00:27:17.622 [2024-11-20 15:36:21.249001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.249034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.249226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.249259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.249519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.249549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.249739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.249770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.250063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.250095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 
00:27:17.622 [2024-11-20 15:36:21.250275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.250304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.250510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.250541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.250725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.250756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.250993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.251026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.251265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.251296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 
00:27:17.622 [2024-11-20 15:36:21.251476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.251507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.251692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.622 [2024-11-20 15:36:21.251723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.622 qpair failed and we were unable to recover it. 00:27:17.622 [2024-11-20 15:36:21.251904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.251934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.252113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.252144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.252326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.252357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 
00:27:17.623 [2024-11-20 15:36:21.252546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.252576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.252783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.252814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.252958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.252991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.253169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.253200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.253384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.253415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 
00:27:17.623 [2024-11-20 15:36:21.253596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.253626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.253816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.253846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.254037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.254069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.254256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.254286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.254547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.254577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 
00:27:17.623 [2024-11-20 15:36:21.254710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.254739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.254912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.254944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.255210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.255241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.255500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.255531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.255711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.255741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 
00:27:17.623 [2024-11-20 15:36:21.255852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.255888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.256068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.256100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.256228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.256257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.256514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.256546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.256732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.256762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 
00:27:17.623 [2024-11-20 15:36:21.257023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.257055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.257260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.257291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.257516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.257547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.257781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.257812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.257982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.258013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 
00:27:17.623 [2024-11-20 15:36:21.258137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.258167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.258452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.258483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.258663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.258693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.258860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.258890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 00:27:17.623 [2024-11-20 15:36:21.259102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.623 [2024-11-20 15:36:21.259134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.623 qpair failed and we were unable to recover it. 
00:27:17.623 [2024-11-20 15:36:21.259318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.623 [2024-11-20 15:36:21.259348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.623 qpair failed and we were unable to recover it.
00:27:17.623 [2024-11-20 15:36:21.259475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.623 [2024-11-20 15:36:21.259506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.623 qpair failed and we were unable to recover it.
00:27:17.623 [2024-11-20 15:36:21.259685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.623 [2024-11-20 15:36:21.259716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.623 qpair failed and we were unable to recover it.
00:27:17.623 [2024-11-20 15:36:21.259945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.623 [2024-11-20 15:36:21.259986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.623 qpair failed and we were unable to recover it.
00:27:17.623 [2024-11-20 15:36:21.260159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.623 [2024-11-20 15:36:21.260189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.623 qpair failed and we were unable to recover it.
00:27:17.623 [2024-11-20 15:36:21.260365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.623 [2024-11-20 15:36:21.260395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.623 qpair failed and we were unable to recover it.
00:27:17.623 [2024-11-20 15:36:21.260577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.623 [2024-11-20 15:36:21.260607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.623 qpair failed and we were unable to recover it.
00:27:17.623 [2024-11-20 15:36:21.260710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.623 [2024-11-20 15:36:21.260740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.623 qpair failed and we were unable to recover it.
00:27:17.623 [2024-11-20 15:36:21.260863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.623 [2024-11-20 15:36:21.260893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.623 qpair failed and we were unable to recover it.
00:27:17.623 [2024-11-20 15:36:21.261085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.623 [2024-11-20 15:36:21.261117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.623 qpair failed and we were unable to recover it.
00:27:17.623 [2024-11-20 15:36:21.261254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.623 [2024-11-20 15:36:21.261284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.623 qpair failed and we were unable to recover it.
00:27:17.623 [2024-11-20 15:36:21.261398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.623 [2024-11-20 15:36:21.261428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.623 qpair failed and we were unable to recover it.
00:27:17.623 [2024-11-20 15:36:21.261569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.623 [2024-11-20 15:36:21.261599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.623 qpair failed and we were unable to recover it.
00:27:17.623 [2024-11-20 15:36:21.261807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.623 [2024-11-20 15:36:21.261838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.623 qpair failed and we were unable to recover it.
00:27:17.623 [2024-11-20 15:36:21.262019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.623 [2024-11-20 15:36:21.262051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.623 qpair failed and we were unable to recover it.
00:27:17.623 [2024-11-20 15:36:21.262175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.623 [2024-11-20 15:36:21.262204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.623 qpair failed and we were unable to recover it.
00:27:17.623 [2024-11-20 15:36:21.262326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.623 [2024-11-20 15:36:21.262356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.623 qpair failed and we were unable to recover it.
00:27:17.623 [2024-11-20 15:36:21.262497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.623 [2024-11-20 15:36:21.262528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.623 qpair failed and we were unable to recover it.
00:27:17.623 [2024-11-20 15:36:21.262796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.623 [2024-11-20 15:36:21.262826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.623 qpair failed and we were unable to recover it.
00:27:17.623 [2024-11-20 15:36:21.263012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.623 [2024-11-20 15:36:21.263045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.263309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.263339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.263524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.263555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.263734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.263764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.263936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.263977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.264213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.264244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.264480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.264515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.264643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.264673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.264867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.264896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.265141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.265172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.265302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.265333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.265500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.265531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.265736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.265767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.265935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.265975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.266170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.266201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.266416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.266446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.266633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.266663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.266835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.266866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.267036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.267068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.267303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.267333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.267520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.267551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.267810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.267841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.267966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.267997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.268127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.268157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.268414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.268445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.268625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.268656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.268775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.268804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.269065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.269098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.269293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.269324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.269507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.269536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.269725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.269755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.269967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.270000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.270253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.270284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.270499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.270531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.270720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.270751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.270970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.271003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.271240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.271269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.271447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.271475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.271653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.271682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.271872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.271904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.272046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.272077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.272342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.272373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.272501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.272532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.272798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.272828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.273094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.273125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.273331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.273362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.273568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.273605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.273863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.273893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.274022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.274055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.274225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.274255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.274371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.274402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.274607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.624 [2024-11-20 15:36:21.274638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.624 qpair failed and we were unable to recover it.
00:27:17.624 [2024-11-20 15:36:21.274823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.274853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.274981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.275013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.275230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.275262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.275390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.275420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.275606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.275635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.275825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.275854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.275981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.276013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.276252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.276284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.276392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.276424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.276685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.276714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.276847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.276878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.277053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.277085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.277221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.277252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.277456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.277487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.277721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.277752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.277922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.277958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.278158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.278189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.278376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.278407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.278598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.278628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.278735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.278765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.279007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.279039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.279228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.279261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.279446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.279477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.279729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.279759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.279889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.279919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.280184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.280216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.280411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.280442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.280613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.280645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.280747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.280777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.281031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.281063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.281189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.281220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.281339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.281370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.281479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.281509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.281743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.281774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.282011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.282049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.282263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.282293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.282467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.282499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.282681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.282712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.282895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.282925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.283142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.283173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.283412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.283442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.283574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.283604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.283791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.283822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.283991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.625 [2024-11-20 15:36:21.284023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.625 qpair failed and we were unable to recover it.
00:27:17.625 [2024-11-20 15:36:21.284138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.625 [2024-11-20 15:36:21.284169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.625 qpair failed and we were unable to recover it. 00:27:17.625 [2024-11-20 15:36:21.284417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.625 [2024-11-20 15:36:21.284447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.625 qpair failed and we were unable to recover it. 00:27:17.625 [2024-11-20 15:36:21.284616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.284648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.284829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.284860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.285048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.285080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 
00:27:17.626 [2024-11-20 15:36:21.285272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.285302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.285479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.285509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.285679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.285710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.285890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.285921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.286140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.286172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 
00:27:17.626 [2024-11-20 15:36:21.286298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.286329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.286455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.286486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.286656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.286689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.286928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.286967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.287099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.287131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 
00:27:17.626 [2024-11-20 15:36:21.287369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.287400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.287516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.287548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.287795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.287867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.288104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.288142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.288398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.288431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 
00:27:17.626 [2024-11-20 15:36:21.288606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.288637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.288772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.288803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.288977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.289009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.289190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.289221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.289403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.289432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 
00:27:17.626 [2024-11-20 15:36:21.289621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.289652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.289867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.289898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.290109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.290141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.290350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.290382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.290509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.290539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 
00:27:17.626 [2024-11-20 15:36:21.290710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.290750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.291013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.291047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.291289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.291322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.291556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.291587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.291854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.291885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 
00:27:17.626 [2024-11-20 15:36:21.292014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.292046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.292309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.292341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.292455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.292485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.292653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.292684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.292887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.292918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 
00:27:17.626 [2024-11-20 15:36:21.293116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.293148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.293331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.293362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.293542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.293572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.293832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.293864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.294053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.294086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 
00:27:17.626 [2024-11-20 15:36:21.294274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.294304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.294413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.294445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.294704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.294736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.294847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.294880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.295074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.295106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 
00:27:17.626 [2024-11-20 15:36:21.295235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.295266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.295368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.295400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.295532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.295562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.295737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.626 [2024-11-20 15:36:21.295769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.626 qpair failed and we were unable to recover it. 00:27:17.626 [2024-11-20 15:36:21.295883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.295913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 
00:27:17.627 [2024-11-20 15:36:21.296097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.296130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 00:27:17.627 [2024-11-20 15:36:21.296320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.296351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 00:27:17.627 [2024-11-20 15:36:21.296467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.296506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 00:27:17.627 [2024-11-20 15:36:21.296610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.296642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 00:27:17.627 [2024-11-20 15:36:21.296826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.296856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 
00:27:17.627 [2024-11-20 15:36:21.297117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.297149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 00:27:17.627 [2024-11-20 15:36:21.297341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.297373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 00:27:17.627 [2024-11-20 15:36:21.297498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.297528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 00:27:17.627 [2024-11-20 15:36:21.297634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.297665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 00:27:17.627 [2024-11-20 15:36:21.297789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.297820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 
00:27:17.627 [2024-11-20 15:36:21.297942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.297986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 00:27:17.627 [2024-11-20 15:36:21.298201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.298233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 00:27:17.627 [2024-11-20 15:36:21.298405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.298437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 00:27:17.627 [2024-11-20 15:36:21.298624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.298654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 00:27:17.627 [2024-11-20 15:36:21.298848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.298879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 
00:27:17.627 [2024-11-20 15:36:21.299090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.299122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 00:27:17.627 [2024-11-20 15:36:21.299248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.299279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 00:27:17.627 [2024-11-20 15:36:21.299516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.299545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 00:27:17.627 [2024-11-20 15:36:21.299727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.299759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 00:27:17.627 [2024-11-20 15:36:21.299928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.299978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 
00:27:17.627 [2024-11-20 15:36:21.300087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.300118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 00:27:17.627 [2024-11-20 15:36:21.300255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.300288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 00:27:17.627 [2024-11-20 15:36:21.300546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.300576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 00:27:17.627 [2024-11-20 15:36:21.300743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.300775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 00:27:17.627 [2024-11-20 15:36:21.300956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.627 [2024-11-20 15:36:21.300989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.627 qpair failed and we were unable to recover it. 
00:27:17.629 [2024-11-20 15:36:21.324529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.629 [2024-11-20 15:36:21.324559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.629 qpair failed and we were unable to recover it. 00:27:17.629 [2024-11-20 15:36:21.324763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.629 [2024-11-20 15:36:21.324794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.629 qpair failed and we were unable to recover it. 00:27:17.629 [2024-11-20 15:36:21.324921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.629 [2024-11-20 15:36:21.324962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.629 qpair failed and we were unable to recover it. 00:27:17.629 [2024-11-20 15:36:21.325133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.629 [2024-11-20 15:36:21.325165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.629 qpair failed and we were unable to recover it. 00:27:17.629 [2024-11-20 15:36:21.325350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.629 [2024-11-20 15:36:21.325381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.629 qpair failed and we were unable to recover it. 
00:27:17.629 [2024-11-20 15:36:21.325595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.629 [2024-11-20 15:36:21.325625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.629 qpair failed and we were unable to recover it. 00:27:17.629 [2024-11-20 15:36:21.325837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.629 [2024-11-20 15:36:21.325868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.629 qpair failed and we were unable to recover it. 00:27:17.629 [2024-11-20 15:36:21.325996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.629 [2024-11-20 15:36:21.326029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.629 qpair failed and we were unable to recover it. 00:27:17.629 [2024-11-20 15:36:21.326211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.629 [2024-11-20 15:36:21.326242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.629 qpair failed and we were unable to recover it. 00:27:17.629 [2024-11-20 15:36:21.326353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.629 [2024-11-20 15:36:21.326384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.629 qpair failed and we were unable to recover it. 
00:27:17.629 [2024-11-20 15:36:21.326558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.629 [2024-11-20 15:36:21.326589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.629 qpair failed and we were unable to recover it. 00:27:17.629 [2024-11-20 15:36:21.326710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.629 [2024-11-20 15:36:21.326740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.629 qpair failed and we were unable to recover it. 00:27:17.629 [2024-11-20 15:36:21.326855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.629 [2024-11-20 15:36:21.326886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.629 qpair failed and we were unable to recover it. 00:27:17.629 [2024-11-20 15:36:21.327098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.629 [2024-11-20 15:36:21.327131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.629 qpair failed and we were unable to recover it. 00:27:17.629 [2024-11-20 15:36:21.327325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.629 [2024-11-20 15:36:21.327354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.629 qpair failed and we were unable to recover it. 
00:27:17.629 [2024-11-20 15:36:21.327561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.327592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.327700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.327730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.327864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.327894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.328024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.328056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.328224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.328256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 
00:27:17.630 [2024-11-20 15:36:21.328381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.328412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.328523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.328554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.328744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.328774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.328896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.328926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.329135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.329167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 
00:27:17.630 [2024-11-20 15:36:21.329290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.329322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.329496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.329526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.329694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.329732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.329901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.329932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.330141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.330174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 
00:27:17.630 [2024-11-20 15:36:21.330299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.330330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.330591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.330622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.330729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.330760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.330891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.330922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.331048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.331079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 
00:27:17.630 [2024-11-20 15:36:21.331282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.331313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.331437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.331467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.331639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.331669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.331840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.331870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.331986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.332020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 
00:27:17.630 [2024-11-20 15:36:21.332200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.332231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.332443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.332474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.332736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.332767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.332962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.332995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.333239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.333269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 
00:27:17.630 [2024-11-20 15:36:21.333388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.333419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.333590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.333621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.333817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.333848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.334101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.334133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.334304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.334335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 
00:27:17.630 [2024-11-20 15:36:21.334511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.334542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.334725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.334756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.334928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.334967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.335207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.335238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.335362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.335393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 
00:27:17.630 [2024-11-20 15:36:21.335570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.335600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.335867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.335898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.336074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.336106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.336217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.336247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.336355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.336386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 
00:27:17.630 [2024-11-20 15:36:21.336561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.336592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.336793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.336824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.337001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.337034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.337222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.337253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.337384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.337414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 
00:27:17.630 [2024-11-20 15:36:21.337620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.337651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.630 [2024-11-20 15:36:21.337770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.630 [2024-11-20 15:36:21.337800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.630 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.337989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.338027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.338137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.338168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.338455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.338486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 
00:27:17.631 [2024-11-20 15:36:21.338661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.338692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.338933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.338975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.339089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.339118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.339294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.339324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.339578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.339609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 
00:27:17.631 [2024-11-20 15:36:21.339850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.339880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.340078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.340110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.340248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.340280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.340413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.340444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.340610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.340641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 
00:27:17.631 [2024-11-20 15:36:21.340833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.340865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.341089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.341122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.341239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.341269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.341458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.341490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.341607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.341638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 
00:27:17.631 [2024-11-20 15:36:21.341830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.341861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.342033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.342065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.342201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.342233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.342445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.342487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.342696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.342727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 
00:27:17.631 [2024-11-20 15:36:21.342864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.342896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.343150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.343182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.343306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.343339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.343465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.343497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.343678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.343708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 
00:27:17.631 [2024-11-20 15:36:21.343903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.343934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.344081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.344113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.344218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.344248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.344426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.344456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.344638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.344670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 
00:27:17.631 [2024-11-20 15:36:21.344779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.344809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.344944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.344986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.345088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.345119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.345307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.345338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.345544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.345575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 
00:27:17.631 [2024-11-20 15:36:21.345682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.345712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.345913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.345957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.346159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.346197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.346321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.346351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.346542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.346572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 
00:27:17.631 [2024-11-20 15:36:21.346686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.346721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.346834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.346863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.346978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.347010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.347247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.347277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.347445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.347477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 
00:27:17.631 [2024-11-20 15:36:21.347673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.347703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.347884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.347915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.348044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.348078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.631 qpair failed and we were unable to recover it. 00:27:17.631 [2024-11-20 15:36:21.348186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.631 [2024-11-20 15:36:21.348216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.348325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.348359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 
00:27:17.632 [2024-11-20 15:36:21.348532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.348563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.348681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.348715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.348834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.348866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.349108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.349140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.349393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.349425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 
00:27:17.632 [2024-11-20 15:36:21.349615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.349646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.349775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.349805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.349920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.349974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.350150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.350182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.350307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.350338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 
00:27:17.632 [2024-11-20 15:36:21.350523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.350553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.350735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.350766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.350878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.350910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.351058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.351088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.351186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.351216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 
00:27:17.632 [2024-11-20 15:36:21.351387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.351417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.351521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.351550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.351785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.351816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.351924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.351964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.352153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.352184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 
00:27:17.632 [2024-11-20 15:36:21.352290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.352323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.352616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.352646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.352751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.352784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.352913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.352945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.353149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.353181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 
00:27:17.632 [2024-11-20 15:36:21.353321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.353351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.353528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.353559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.353726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.353762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.353879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.353910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.354146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.354179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 
00:27:17.632 [2024-11-20 15:36:21.354312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.354343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.354527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.354558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.354676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.354708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.354971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.355003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.355178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.355210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 
00:27:17.632 [2024-11-20 15:36:21.355324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.355355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.355599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.355630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.355749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.355782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.355893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.355924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.356056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.356087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 
00:27:17.632 [2024-11-20 15:36:21.356274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.356305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.356483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.356514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.356685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.356715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.356975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.357008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.632 [2024-11-20 15:36:21.357185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.357214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 
00:27:17.632 [2024-11-20 15:36:21.357328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.632 [2024-11-20 15:36:21.357359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.632 qpair failed and we were unable to recover it. 00:27:17.633 [2024-11-20 15:36:21.357499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.633 [2024-11-20 15:36:21.357531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.633 qpair failed and we were unable to recover it. 00:27:17.633 [2024-11-20 15:36:21.357740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.633 [2024-11-20 15:36:21.357770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.633 qpair failed and we were unable to recover it. 00:27:17.633 [2024-11-20 15:36:21.357936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.633 [2024-11-20 15:36:21.357988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.633 qpair failed and we were unable to recover it. 00:27:17.633 [2024-11-20 15:36:21.358252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.633 [2024-11-20 15:36:21.358284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.633 qpair failed and we were unable to recover it. 
00:27:17.633 [2024-11-20 15:36:21.358391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.358420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.358549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.358580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.358756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.358789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.359059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.359090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.359336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.359409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.359651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.359687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.359806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.359840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.359966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.360001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.360124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.360156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.360272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.360309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.360496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.360527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.360665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.360696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.360874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.360905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.361038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.361070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.361255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.361288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.361462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.361493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.361717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.361749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.361857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.361893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.362047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.362080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.362269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.362305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.362439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.362470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.362650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.362682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.362811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.362843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.362968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.363002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.363194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.363225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.363351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.363381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.363592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.363624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.363790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.363822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.364044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.364078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.364207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.364240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.364437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.364468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.364736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.364772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.364973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.365006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.365196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.365228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.365439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.365470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.365598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.365629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.365801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.365832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.366021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.366053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.366169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.366202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.366346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.366378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.366548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.366579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.366703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.366733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.366974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.367007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.367257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.367287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.367461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.367494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.367607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.367638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.367760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.367793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.367914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.367945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.368157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.633 [2024-11-20 15:36:21.368188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.633 qpair failed and we were unable to recover it.
00:27:17.633 [2024-11-20 15:36:21.368318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.368351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.368466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.368497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.368606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.368636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.368760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.368791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.368918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.368956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.369159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.369190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.369304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.369335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.369554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.369585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.369762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.369794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.369944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.370004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.370107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.370137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.370259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.370291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.370408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.370441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.370540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.370570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.370815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.370849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.371031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.371065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.371196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.371226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.371403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.371435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.371625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.371657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.371770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.371801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.371944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.371986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.372170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.372201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.372379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.372415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.372530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.372562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.372667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.372698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.372881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.372912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.373093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.373126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.373321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.373352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.373473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.373502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.373611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.373644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.373820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.373851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.373968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.374001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.374241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.374272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.374392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.374422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.374645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.374674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.374858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.374889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.375034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.375066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.375186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.375216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.375349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.375381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.375502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.375532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.375815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.375846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.376037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.376070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.376244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.376274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.376386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.376418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.376604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.376635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.376806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.376839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.376959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.376992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.377129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.377160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.377275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.377305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.377499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.377530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.377648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.377680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.377785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.634 [2024-11-20 15:36:21.377815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.634 qpair failed and we were unable to recover it.
00:27:17.634 [2024-11-20 15:36:21.377988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.635 [2024-11-20 15:36:21.378021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.635 qpair failed and we were unable to recover it.
00:27:17.635 [2024-11-20 15:36:21.378133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.635 [2024-11-20 15:36:21.378166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.635 qpair failed and we were unable to recover it.
00:27:17.635 [2024-11-20 15:36:21.378393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.635 [2024-11-20 15:36:21.378424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.635 qpair failed and we were unable to recover it.
00:27:17.635 [2024-11-20 15:36:21.378596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.635 [2024-11-20 15:36:21.378627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.635 qpair failed and we were unable to recover it.
00:27:17.635 [2024-11-20 15:36:21.378744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.635 [2024-11-20 15:36:21.378774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.635 qpair failed and we were unable to recover it.
00:27:17.635 [2024-11-20 15:36:21.378882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.635 [2024-11-20 15:36:21.378914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.635 qpair failed and we were unable to recover it.
00:27:17.635 [2024-11-20 15:36:21.379047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.635 [2024-11-20 15:36:21.379078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.635 qpair failed and we were unable to recover it.
00:27:17.635 [2024-11-20 15:36:21.379321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.635 [2024-11-20 15:36:21.379351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.635 qpair failed and we were unable to recover it.
00:27:17.635 [2024-11-20 15:36:21.379462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.635 [2024-11-20 15:36:21.379493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.635 qpair failed and we were unable to recover it.
00:27:17.635 [2024-11-20 15:36:21.379644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.635 [2024-11-20 15:36:21.379676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.635 qpair failed and we were unable to recover it.
00:27:17.635 [2024-11-20 15:36:21.379794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.635 [2024-11-20 15:36:21.379829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.635 qpair failed and we were unable to recover it.
00:27:17.635 [2024-11-20 15:36:21.380010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.635 [2024-11-20 15:36:21.380042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.635 qpair failed and we were unable to recover it.
00:27:17.635 [2024-11-20 15:36:21.380170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.635 [2024-11-20 15:36:21.380200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.635 qpair failed and we were unable to recover it.
00:27:17.635 [2024-11-20 15:36:21.380329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.635 [2024-11-20 15:36:21.380359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.635 qpair failed and we were unable to recover it.
00:27:17.635 [2024-11-20 15:36:21.380529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.380560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.380727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.380759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.380886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.380917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.381099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.381132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.381321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.381351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 
00:27:17.635 [2024-11-20 15:36:21.381530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.381561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.381724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.381755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.381860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.381891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.382121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.382153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.382323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.382354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 
00:27:17.635 [2024-11-20 15:36:21.382531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.382563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.382742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.382773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.382905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.382936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.383193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.383225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.383359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.383390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 
00:27:17.635 [2024-11-20 15:36:21.383559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.383589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.383772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.383802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.383926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.383968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.384146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.384178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.384283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.384313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 
00:27:17.635 [2024-11-20 15:36:21.384442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.384473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.384651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.384683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.384780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.384810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.384988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.385022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.385196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.385227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 
00:27:17.635 [2024-11-20 15:36:21.385358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.385389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.385490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.385521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.385694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.385723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.385840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.385871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.386005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.386037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 
00:27:17.635 [2024-11-20 15:36:21.386237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.386268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.386448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.386478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.386659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.386689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.635 qpair failed and we were unable to recover it. 00:27:17.635 [2024-11-20 15:36:21.386860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.635 [2024-11-20 15:36:21.386890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.387007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.387039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 
00:27:17.636 [2024-11-20 15:36:21.387327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.387357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.387476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.387513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.387691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.387722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.387837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.387868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.388090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.388123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 
00:27:17.636 [2024-11-20 15:36:21.388248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.388280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.388541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.388572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.388691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.388723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.388836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.388867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.389047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.389080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 
00:27:17.636 [2024-11-20 15:36:21.389256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.389288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.389421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.389451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.389621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.389652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.389766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.389797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.389901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.389932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 
00:27:17.636 [2024-11-20 15:36:21.390071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.390102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.390235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.390266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.390442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.390473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.390581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.390612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.390784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.390814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 
00:27:17.636 [2024-11-20 15:36:21.390936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.390977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.391153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.391184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.391424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.391454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.391562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.391594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.391782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.391814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 
00:27:17.636 [2024-11-20 15:36:21.391925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.391963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.392092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.392123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.392225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.392256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.392369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.392409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.392518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.392549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 
00:27:17.636 [2024-11-20 15:36:21.392659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.392690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.392875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.392905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.393090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.393122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.393300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.393330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.393503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.393534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 
00:27:17.636 [2024-11-20 15:36:21.393651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.393682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.393802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.393833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.393980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.394014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.394142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.394173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.394344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.394374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 
00:27:17.636 [2024-11-20 15:36:21.394549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.394579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.394748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.394785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.394889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.394921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.395046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.395078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 00:27:17.636 [2024-11-20 15:36:21.395205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.636 [2024-11-20 15:36:21.395237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.636 qpair failed and we were unable to recover it. 
00:27:17.639 [2024-11-20 15:36:21.415430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.415460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.415565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.415596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.415764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.415794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.415973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.416005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.416126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.416158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 
00:27:17.639 [2024-11-20 15:36:21.416287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.416318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.416489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.416527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.416647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.416677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.416790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.416821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.416933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.416974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 
00:27:17.639 [2024-11-20 15:36:21.417158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.417188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.417307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.417337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.417526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.417556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.417759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.417789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.417905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.417935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 
00:27:17.639 [2024-11-20 15:36:21.418174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.418206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.418321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.418351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.418466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.418497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.418601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.418632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.418737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.418768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 
00:27:17.639 [2024-11-20 15:36:21.418878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.418910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.419104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.419135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.419245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.419276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.419470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.419500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.419609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.419639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 
00:27:17.639 [2024-11-20 15:36:21.419751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.419783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.419896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.419927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.420121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.420152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.420253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.420283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.420407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.420437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 
00:27:17.639 [2024-11-20 15:36:21.420545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.420576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.420692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.420723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.420825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.420855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.421030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.421114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.421298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.421369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 
00:27:17.639 [2024-11-20 15:36:21.421532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.421567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.421676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.421707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.421819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.421850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.421976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.422010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.422193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.422225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 
00:27:17.639 [2024-11-20 15:36:21.422488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.422520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.422622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.422652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.422787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.422818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.422934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.422983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.423092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.423122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 
00:27:17.639 [2024-11-20 15:36:21.423310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.423340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.423449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.639 [2024-11-20 15:36:21.423480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.639 qpair failed and we were unable to recover it. 00:27:17.639 [2024-11-20 15:36:21.423621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.423653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.423767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.423797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.423906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.423939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 
00:27:17.640 [2024-11-20 15:36:21.424077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.424109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.424346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.424377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.426140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.426197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.426411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.426444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.426685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.426717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 
00:27:17.640 [2024-11-20 15:36:21.426896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.426929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.427150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.427181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.427306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.427337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.427544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.427575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.427760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.427790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 
00:27:17.640 [2024-11-20 15:36:21.427930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.427973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.428089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.428121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.428236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.428266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.428438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.428470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.428600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.428630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 
00:27:17.640 [2024-11-20 15:36:21.428754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.428783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.428918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.428983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.429105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.429137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.429332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.429363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.429605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.429637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 
00:27:17.640 [2024-11-20 15:36:21.429767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.429799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.429907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.429938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.430088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.430119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.430296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.430333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 00:27:17.640 [2024-11-20 15:36:21.430471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.640 [2024-11-20 15:36:21.430502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.640 qpair failed and we were unable to recover it. 
00:27:17.640 [2024-11-20 15:36:21.430622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.640 [2024-11-20 15:36:21.430655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.640 qpair failed and we were unable to recover it.
[same connect()/qpair-failure triplet repeated for tqpair=0x7fdef8000b90 through 15:36:21.434755; only timestamps differ]
00:27:17.641 [2024-11-20 15:36:21.434914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.641 [2024-11-20 15:36:21.434995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.641 qpair failed and we were unable to recover it.
[same triplet repeated for tqpair=0x7fdef0000b90 through 15:36:21.450717; only timestamps differ]
00:27:17.642 [2024-11-20 15:36:21.450907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.642 [2024-11-20 15:36:21.450937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.642 qpair failed and we were unable to recover it. 00:27:17.642 [2024-11-20 15:36:21.451053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.642 [2024-11-20 15:36:21.451087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.642 qpair failed and we were unable to recover it. 00:27:17.642 [2024-11-20 15:36:21.451194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.642 [2024-11-20 15:36:21.451226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.642 qpair failed and we were unable to recover it. 00:27:17.642 [2024-11-20 15:36:21.451330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.642 [2024-11-20 15:36:21.451360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.642 qpair failed and we were unable to recover it. 00:27:17.642 [2024-11-20 15:36:21.451500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.642 [2024-11-20 15:36:21.451532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.642 qpair failed and we were unable to recover it. 
00:27:17.642 [2024-11-20 15:36:21.451671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.642 [2024-11-20 15:36:21.451701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.642 qpair failed and we were unable to recover it. 00:27:17.642 [2024-11-20 15:36:21.451801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.642 [2024-11-20 15:36:21.451832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.642 qpair failed and we were unable to recover it. 00:27:17.642 [2024-11-20 15:36:21.452005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.642 [2024-11-20 15:36:21.452038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.642 qpair failed and we were unable to recover it. 00:27:17.642 [2024-11-20 15:36:21.452322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.452353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.452481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.452512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 
00:27:17.643 [2024-11-20 15:36:21.452689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.452720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.452822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.452852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.452962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.452992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.453105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.453134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.453310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.453342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 
00:27:17.643 [2024-11-20 15:36:21.453453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.453482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.453590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.453619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.453806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.453837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.454015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.454047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.454152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.454181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 
00:27:17.643 [2024-11-20 15:36:21.454290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.454322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.454427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.454458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.454635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.454665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.454836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.454868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.454973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.455004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 
00:27:17.643 [2024-11-20 15:36:21.455114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.455146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.455268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.455298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.455559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.455589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.455760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.455790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.455965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.455997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 
00:27:17.643 [2024-11-20 15:36:21.456118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.456155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.456338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.456368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.456629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.456660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.456762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.456791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.456985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.457017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 
00:27:17.643 [2024-11-20 15:36:21.457130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.457162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.457332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.457362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.457540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.457570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.457761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.457791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.457897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.457927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 
00:27:17.643 [2024-11-20 15:36:21.458067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.458099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.458218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.458250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.458441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.458472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.458586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.458615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.458746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.458778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 
00:27:17.643 [2024-11-20 15:36:21.459017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.459050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.459242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.459273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.459467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.459498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.459698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.459728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.459838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.459869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 
00:27:17.643 [2024-11-20 15:36:21.460114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.460146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.460263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.460294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.460509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.460540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.460708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.460739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.460924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.460963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 
00:27:17.643 [2024-11-20 15:36:21.461138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.461168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.461296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.461326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.461468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.461500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.461617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.461648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 00:27:17.643 [2024-11-20 15:36:21.461826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.643 [2024-11-20 15:36:21.461858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.643 qpair failed and we were unable to recover it. 
00:27:17.643 [2024-11-20 15:36:21.461965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.644 [2024-11-20 15:36:21.461997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.644 qpair failed and we were unable to recover it. 00:27:17.644 [2024-11-20 15:36:21.462245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.644 [2024-11-20 15:36:21.462276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.644 qpair failed and we were unable to recover it. 00:27:17.644 [2024-11-20 15:36:21.462461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.644 [2024-11-20 15:36:21.462491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.644 qpair failed and we were unable to recover it. 00:27:17.644 [2024-11-20 15:36:21.462605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.644 [2024-11-20 15:36:21.462636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.644 qpair failed and we were unable to recover it. 00:27:17.644 [2024-11-20 15:36:21.462823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.644 [2024-11-20 15:36:21.462855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.644 qpair failed and we were unable to recover it. 
00:27:17.644 [2024-11-20 15:36:21.463028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.644 [2024-11-20 15:36:21.463060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.644 qpair failed and we were unable to recover it. 00:27:17.644 [2024-11-20 15:36:21.463171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.644 [2024-11-20 15:36:21.463201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.644 qpair failed and we were unable to recover it. 00:27:17.644 [2024-11-20 15:36:21.463383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.644 [2024-11-20 15:36:21.463414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.644 qpair failed and we were unable to recover it. 00:27:17.644 [2024-11-20 15:36:21.463606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.644 [2024-11-20 15:36:21.463636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.644 qpair failed and we were unable to recover it. 00:27:17.644 [2024-11-20 15:36:21.463744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.644 [2024-11-20 15:36:21.463774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.644 qpair failed and we were unable to recover it. 
00:27:17.644 [2024-11-20 15:36:21.463888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.644 [2024-11-20 15:36:21.463924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.644 qpair failed and we were unable to recover it. 00:27:17.644 [2024-11-20 15:36:21.464191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.644 [2024-11-20 15:36:21.464223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.644 qpair failed and we were unable to recover it. 00:27:17.644 [2024-11-20 15:36:21.464352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.644 [2024-11-20 15:36:21.464383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.644 qpair failed and we were unable to recover it. 00:27:17.644 [2024-11-20 15:36:21.464494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.644 [2024-11-20 15:36:21.464525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.644 qpair failed and we were unable to recover it. 00:27:17.644 [2024-11-20 15:36:21.464713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.644 [2024-11-20 15:36:21.464744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.644 qpair failed and we were unable to recover it. 
00:27:17.644 [2024-11-20 15:36:21.464861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.644 [2024-11-20 15:36:21.464891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.644 qpair failed and we were unable to recover it. 
[... the connect()/qpair error pair above repeats continuously from 15:36:21.464861 through 15:36:21.487060, always with errno = 111, tqpair=0x7fdef0000b90, addr=10.0.0.2, port=4420; intermediate repeats omitted ...]
00:27:17.646 [2024-11-20 15:36:21.487029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.646 [2024-11-20 15:36:21.487060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.646 qpair failed and we were unable to recover it. 
00:27:17.646 [2024-11-20 15:36:21.487270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.646 [2024-11-20 15:36:21.487308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.646 qpair failed and we were unable to recover it. 00:27:17.646 [2024-11-20 15:36:21.487491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.646 [2024-11-20 15:36:21.487522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.646 qpair failed and we were unable to recover it. 00:27:17.646 [2024-11-20 15:36:21.487646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.646 [2024-11-20 15:36:21.487677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.646 qpair failed and we were unable to recover it. 00:27:17.646 [2024-11-20 15:36:21.487791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.646 [2024-11-20 15:36:21.487820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.646 qpair failed and we were unable to recover it. 00:27:17.646 [2024-11-20 15:36:21.488013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.646 [2024-11-20 15:36:21.488047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.646 qpair failed and we were unable to recover it. 
00:27:17.646 [2024-11-20 15:36:21.488183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.646 [2024-11-20 15:36:21.488215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.646 qpair failed and we were unable to recover it. 00:27:17.646 [2024-11-20 15:36:21.488326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.646 [2024-11-20 15:36:21.488357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.646 qpair failed and we were unable to recover it. 00:27:17.646 [2024-11-20 15:36:21.488471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.488501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.488609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.488640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.488761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.488791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 
00:27:17.935 [2024-11-20 15:36:21.488914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.488946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.489157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.489188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.489309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.489339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.489514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.489546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.489653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.489684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 
00:27:17.935 [2024-11-20 15:36:21.489868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.489898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.490022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.490055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.490178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.490209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.490457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.490487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.490596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.490626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 
00:27:17.935 [2024-11-20 15:36:21.490819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.490852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.490981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.491014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.491185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.491215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.491335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.491366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.491552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.491583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 
00:27:17.935 [2024-11-20 15:36:21.491707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.491737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.491872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.491904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.492032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.492064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.492190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.492221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.492343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.492375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 
00:27:17.935 [2024-11-20 15:36:21.492499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.492530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.492732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.492763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.492899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.492930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.493122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.493153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.493283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.493313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 
00:27:17.935 [2024-11-20 15:36:21.493519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.493550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.493736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.493766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.493969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.935 [2024-11-20 15:36:21.494000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.935 qpair failed and we were unable to recover it. 00:27:17.935 [2024-11-20 15:36:21.494119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.494149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.494431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.494463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 
00:27:17.936 [2024-11-20 15:36:21.494594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.494629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.494747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.494777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.494958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.494989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.495177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.495209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.495335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.495367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 
00:27:17.936 [2024-11-20 15:36:21.495534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.495564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.495669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.495700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.495907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.495939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.496069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.496101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.496226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.496256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 
00:27:17.936 [2024-11-20 15:36:21.496368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.496399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.496667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.496697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.496810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.496841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.497033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.497065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.497193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.497224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 
00:27:17.936 [2024-11-20 15:36:21.497398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.497429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.497536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.497566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.497681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.497714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.497908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.497939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.498124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.498154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 
00:27:17.936 [2024-11-20 15:36:21.498391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.498424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.498661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.498692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.498830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.498861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.499038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.499071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.499202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.499233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 
00:27:17.936 [2024-11-20 15:36:21.499337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.499366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.499590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.499622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.499821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.499854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.500121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.500153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.500339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.500369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 
00:27:17.936 [2024-11-20 15:36:21.500553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.500584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.500695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.500724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.500835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.500863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.501097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.501130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 00:27:17.936 [2024-11-20 15:36:21.501255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.501287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 
00:27:17.936 [2024-11-20 15:36:21.501394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.936 [2024-11-20 15:36:21.501424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.936 qpair failed and we were unable to recover it. 
[... the same connect()/nvme_tcp_qpair_connect_sock error pair for tqpair=0x7fdef0000b90 repeats ~110 more times between 15:36:21.501 and 15:36:21.522; only the timestamps differ ...]
00:27:17.938 [2024-11-20 15:36:21.522020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.522057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 
00:27:17.938 [2024-11-20 15:36:21.522365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.522437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 
00:27:17.938 [2024-11-20 15:36:21.522695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.522731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 
00:27:17.938 [2024-11-20 15:36:21.522941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.522994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 
00:27:17.938 [2024-11-20 15:36:21.523167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.523198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.523369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.523400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.523639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.523670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.523785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.523814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.523944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.523988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 
00:27:17.938 [2024-11-20 15:36:21.524121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.524152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.524341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.524372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.524496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.524529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.524710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.524741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.524875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.524907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 
00:27:17.938 [2024-11-20 15:36:21.525180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.525212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.525347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.525379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.525555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.525586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.525708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.525737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.525859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.525890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 
00:27:17.938 [2024-11-20 15:36:21.526024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.526055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.526168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.526197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.526440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.526469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.526579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.526609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.526849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.526880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 
00:27:17.938 [2024-11-20 15:36:21.527006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.527041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.527289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.527321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.527422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.527451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.527576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.527606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.527721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.527757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 
00:27:17.938 [2024-11-20 15:36:21.527962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.527995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.528112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.528141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.938 [2024-11-20 15:36:21.528377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.938 [2024-11-20 15:36:21.528408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.938 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.528583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.528615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.528731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.528762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 
00:27:17.939 [2024-11-20 15:36:21.528935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.528975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.529083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.529114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.529292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.529323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.529557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.529587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.529709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.529741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 
00:27:17.939 [2024-11-20 15:36:21.529861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.529890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.530082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.530112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.530296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.530327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.530600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.530630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.530804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.530835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 
00:27:17.939 [2024-11-20 15:36:21.530972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.531005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.531182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.531213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.531317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.531348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.531537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.531567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.531755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.531787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 
00:27:17.939 [2024-11-20 15:36:21.531968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.532001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.532199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.532230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.532416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.532447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.532630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.532660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.532788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.532819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 
00:27:17.939 [2024-11-20 15:36:21.533008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.533041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.533279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.533315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.533564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.533595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.533700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.533730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.533836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.533866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 
00:27:17.939 [2024-11-20 15:36:21.534046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.534077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.534197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.534226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.534412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.534441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.534572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.534601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.534773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.534804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 
00:27:17.939 [2024-11-20 15:36:21.534922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.534965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.535162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.535193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.535410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.535441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.535570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.535600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.535705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.535735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 
00:27:17.939 [2024-11-20 15:36:21.535963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.535997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.536104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.536133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.536334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.536364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.536485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.536514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.536706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.536778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 
00:27:17.939 [2024-11-20 15:36:21.536990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.537026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.537163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.537195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.537319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.537350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.537540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.537571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.537693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.537724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 
00:27:17.939 [2024-11-20 15:36:21.537847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.537878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.538121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.538156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.538395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.538428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.538557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.538597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.538715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.538746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 
00:27:17.939 [2024-11-20 15:36:21.538940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.538987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.539167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.539199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.539386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.539418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.539586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.539618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 00:27:17.939 [2024-11-20 15:36:21.539725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.939 [2024-11-20 15:36:21.539754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.939 qpair failed and we were unable to recover it. 
00:27:17.939 [2024-11-20 15:36:21.539927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.939 [2024-11-20 15:36:21.539969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.939 qpair failed and we were unable to recover it.
00:27:17.939 [2024-11-20 15:36:21.540104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.939 [2024-11-20 15:36:21.540134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.939 qpair failed and we were unable to recover it.
00:27:17.939 [2024-11-20 15:36:21.540251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.939 [2024-11-20 15:36:21.540281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.939 qpair failed and we were unable to recover it.
00:27:17.939 [2024-11-20 15:36:21.540535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.939 [2024-11-20 15:36:21.540566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.939 qpair failed and we were unable to recover it.
00:27:17.939 [2024-11-20 15:36:21.540824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.939 [2024-11-20 15:36:21.540857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.939 qpair failed and we were unable to recover it.
00:27:17.939 [2024-11-20 15:36:21.541039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.939 [2024-11-20 15:36:21.541072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.939 qpair failed and we were unable to recover it.
00:27:17.939 [2024-11-20 15:36:21.541258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.939 [2024-11-20 15:36:21.541291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.541427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.541460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.541699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.541731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.541978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.542012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.542215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.542247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.542359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.542389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.542516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.542545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.542782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.542813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.543069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.543101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.543218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.543250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.543418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.543450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.543578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.543608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.543776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.543807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.544046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.544078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.544218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.544255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.544449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.544480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.544653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.544684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.544900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.544932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.545149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.545181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.545442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.545473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.545663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.545694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.545881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.545911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.546091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.546124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.546254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.546285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.546522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.546552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.546735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.546765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.546890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.546922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.547127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.547159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.547284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.547315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.547503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.547534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.547795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.547825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.548063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.548095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.548222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.548253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.548443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.548473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.548785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.548817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.548938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.548980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.549169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.549200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.549484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.549514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.549705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.549735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.549935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.549976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.550176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.550207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.550405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.550441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.550579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.550610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.550794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.550825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.551064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.551094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.551273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.551304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.551480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.551511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.551681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.551712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.551900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.551932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.552062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.552093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.552271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.552301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.552485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.552515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.552695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.552726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.552896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.552925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.553131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.553162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.553339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.553370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.553498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.553535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.553718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.553748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.553858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.553894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.554024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.554057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.554179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.554210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.554394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.554425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.554594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.554625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.554801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.554832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.940 qpair failed and we were unable to recover it.
00:27:17.940 [2024-11-20 15:36:21.555004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.940 [2024-11-20 15:36:21.555037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.555236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.555271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.555457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.555490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.555670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.555703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.555837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.555875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.556066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.556098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.556308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.556339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.556443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.556474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.556591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.556622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.556810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.556846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.557055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.557090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.557292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.557323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.557508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.557538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.557658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.557689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.557813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.557845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.558104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.558137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.558255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.558285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.558534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.558569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.558757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.558792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.558984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.559016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.559172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.559203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.559328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.559360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.559468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.559499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.559675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.559707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.559833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.941 [2024-11-20 15:36:21.559865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.941 qpair failed and we were unable to recover it.
00:27:17.941 [2024-11-20 15:36:21.559997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.560032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.560280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.560315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.560544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.560575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.560771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.560802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.561039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.561070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 
00:27:17.941 [2024-11-20 15:36:21.561363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.561394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.561566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.561603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.561819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.561855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.562047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.562082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.562198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.562229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 
00:27:17.941 [2024-11-20 15:36:21.562412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.562443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.562617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.562649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.562859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.562891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.563037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.563069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.563242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.563280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 
00:27:17.941 [2024-11-20 15:36:21.563392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.563427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.563542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.563574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.563753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.563784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.563975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.564015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.564187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.564218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 
00:27:17.941 [2024-11-20 15:36:21.564419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.564452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.564621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.564652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.564765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.564797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.564927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.564966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.565081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.565112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 
00:27:17.941 [2024-11-20 15:36:21.565294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.565326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.565494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.565526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.565712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.565743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.565928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.565969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.566151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.566184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 
00:27:17.941 [2024-11-20 15:36:21.566367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.566399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.566522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.566553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.566670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.566703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.566887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.566920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 00:27:17.941 [2024-11-20 15:36:21.567107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.941 [2024-11-20 15:36:21.567139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.941 qpair failed and we were unable to recover it. 
00:27:17.942 [2024-11-20 15:36:21.567396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.567428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.567536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.567567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.567685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.567716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.567890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.567921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.568150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.568181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 
00:27:17.942 [2024-11-20 15:36:21.568305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.568337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.568575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.568608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.568801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.568831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.569013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.569047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.569166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.569197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 
00:27:17.942 [2024-11-20 15:36:21.569385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.569416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.569674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.569706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.569885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.569917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.570058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.570090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.570219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.570250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 
00:27:17.942 [2024-11-20 15:36:21.570440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.570472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.570674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.570705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.570828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.570859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.571120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.571152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.571274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.571305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 
00:27:17.942 [2024-11-20 15:36:21.571474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.571504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.571642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.571672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.571786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.571817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.571967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.572000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.572189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.572220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 
00:27:17.942 [2024-11-20 15:36:21.572340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.572371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.572518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.572550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.572730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.572762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.572938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.572984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.573164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.573196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 
00:27:17.942 [2024-11-20 15:36:21.573366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.573397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.573570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.573601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.573734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.573765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.573899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.573931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.574133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.574164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 
00:27:17.942 [2024-11-20 15:36:21.574273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.574305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.574450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.574481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.574597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.574628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.574826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.574858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.574994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.575033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 
00:27:17.942 [2024-11-20 15:36:21.575167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.575200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.575392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.575423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.575547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.575578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.575770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.575802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.575921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.575957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 
00:27:17.942 [2024-11-20 15:36:21.576061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.576093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.576289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.576320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.576446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.576478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.576624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.576655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.576778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.576808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 
00:27:17.942 [2024-11-20 15:36:21.576999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.577031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.577163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.577194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.577293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.577325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.577454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.577485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 00:27:17.942 [2024-11-20 15:36:21.577605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.942 [2024-11-20 15:36:21.577636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.942 qpair failed and we were unable to recover it. 
00:27:17.944 [2024-11-20 15:36:21.598807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.598838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 00:27:17.944 [2024-11-20 15:36:21.598969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.599001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 00:27:17.944 [2024-11-20 15:36:21.599192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.599223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 00:27:17.944 [2024-11-20 15:36:21.599349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.599380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 00:27:17.944 [2024-11-20 15:36:21.599495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.599526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 
00:27:17.944 [2024-11-20 15:36:21.599734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.599765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 00:27:17.944 [2024-11-20 15:36:21.599877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.599908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 00:27:17.944 [2024-11-20 15:36:21.600057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.600089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 00:27:17.944 [2024-11-20 15:36:21.600198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.600229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 00:27:17.944 [2024-11-20 15:36:21.600396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.600426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 
00:27:17.944 [2024-11-20 15:36:21.600549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.600582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 00:27:17.944 [2024-11-20 15:36:21.600788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.600819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 00:27:17.944 [2024-11-20 15:36:21.600930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.600971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 00:27:17.944 [2024-11-20 15:36:21.601092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.601125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 00:27:17.944 [2024-11-20 15:36:21.601256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.601286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 
00:27:17.944 [2024-11-20 15:36:21.601473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.601505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 00:27:17.944 [2024-11-20 15:36:21.601692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.601724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 00:27:17.944 [2024-11-20 15:36:21.601843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.601873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 00:27:17.944 [2024-11-20 15:36:21.601991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.602034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 00:27:17.944 [2024-11-20 15:36:21.602220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.602250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 
00:27:17.944 [2024-11-20 15:36:21.602364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.602394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 00:27:17.944 [2024-11-20 15:36:21.602516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.944 [2024-11-20 15:36:21.602548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.944 qpair failed and we were unable to recover it. 00:27:17.944 [2024-11-20 15:36:21.602732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.602764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.602876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.602913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.603207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.603240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 
00:27:17.945 [2024-11-20 15:36:21.603355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.603385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.603509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.603539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.603652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.603684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.603858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.603889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.604159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.604191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 
00:27:17.945 [2024-11-20 15:36:21.604417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.604448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.604638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.604669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.604885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.604916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.605155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.605226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.605411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.605480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 
00:27:17.945 [2024-11-20 15:36:21.605727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.605764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.605874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.605906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.606051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.606091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.606220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.606251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.606369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.606401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 
00:27:17.945 [2024-11-20 15:36:21.606520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.606551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.606656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.606687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.606802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.606834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.607019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.607053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.607228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.607259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 
00:27:17.945 [2024-11-20 15:36:21.607446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.607475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.607587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.607619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.607728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.607759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.607871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.607902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.608030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.608062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 
00:27:17.945 [2024-11-20 15:36:21.608247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.608287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.608391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.608420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.608527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.608557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.608679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.608708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.608839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.608871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 
00:27:17.945 [2024-11-20 15:36:21.608986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.609018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.609140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.609172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.609345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.609378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.609541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.609570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.609675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.609705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 
00:27:17.945 [2024-11-20 15:36:21.609830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.609860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.610031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.610064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.610174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.610205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.610339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.610368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.610540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.610571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 
00:27:17.945 [2024-11-20 15:36:21.610694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.610725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.610846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.610877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.611004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.611037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.611142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.611174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.611294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.611326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 
00:27:17.945 [2024-11-20 15:36:21.611506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.611537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.611708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.611738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.611869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.611899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.612078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.612111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 00:27:17.945 [2024-11-20 15:36:21.612214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.612243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it. 
00:27:17.945 [2024-11-20 15:36:21.612357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.945 [2024-11-20 15:36:21.612387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.945 qpair failed and we were unable to recover it.
00:27:17.947 [2024-11-20 15:36:21.633080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.947 [2024-11-20 15:36:21.633112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.947 qpair failed and we were unable to recover it. 00:27:17.947 [2024-11-20 15:36:21.633227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.947 [2024-11-20 15:36:21.633259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.947 qpair failed and we were unable to recover it. 00:27:17.947 [2024-11-20 15:36:21.633384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.947 [2024-11-20 15:36:21.633414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.947 qpair failed and we were unable to recover it. 00:27:17.947 [2024-11-20 15:36:21.633514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.947 [2024-11-20 15:36:21.633546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.947 qpair failed and we were unable to recover it. 00:27:17.947 [2024-11-20 15:36:21.633661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.947 [2024-11-20 15:36:21.633691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.947 qpair failed and we were unable to recover it. 
00:27:17.947 [2024-11-20 15:36:21.633862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.947 [2024-11-20 15:36:21.633893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.947 qpair failed and we were unable to recover it. 00:27:17.947 [2024-11-20 15:36:21.634028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.947 [2024-11-20 15:36:21.634060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.947 qpair failed and we were unable to recover it. 00:27:17.947 [2024-11-20 15:36:21.634224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.947 [2024-11-20 15:36:21.634293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.947 qpair failed and we were unable to recover it. 00:27:17.947 [2024-11-20 15:36:21.634493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.947 [2024-11-20 15:36:21.634528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.947 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.634656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.634687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 
00:27:17.948 [2024-11-20 15:36:21.634810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.634840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.634939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.634985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.635158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.635190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.635304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.635335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.635541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.635573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 
00:27:17.948 [2024-11-20 15:36:21.635704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.635734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.635926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.635969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.636090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.636121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.636294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.636326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.636467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.636499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 
00:27:17.948 [2024-11-20 15:36:21.636676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.636717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.636889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.636920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.637108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.637138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.637317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.637347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.637476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.637506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 
00:27:17.948 [2024-11-20 15:36:21.637635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.637665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.637841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.637872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.638113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.638146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.638350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.638382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.638485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.638515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 
00:27:17.948 [2024-11-20 15:36:21.638635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.638666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.638769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.638799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.638985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.639017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.639217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.639249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.639389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.639420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 
00:27:17.948 [2024-11-20 15:36:21.639542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.639572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.639759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.639790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.639978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.640020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.640191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.640223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.640347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.640378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 
00:27:17.948 [2024-11-20 15:36:21.640562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.640593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.640702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.640732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.640929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.640972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.641096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.641127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.641253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.641283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 
00:27:17.948 [2024-11-20 15:36:21.641455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.641486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.641695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.641725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.641864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.641907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.642150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.642188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.642399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.642432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 
00:27:17.948 [2024-11-20 15:36:21.642555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.642586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.642710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.642741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.642858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.642890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.643101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.643133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.643251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.643283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 
00:27:17.948 [2024-11-20 15:36:21.643399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.643428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.643617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.643648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.643885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.643916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.644113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.644150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.644270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.644301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 
00:27:17.948 [2024-11-20 15:36:21.644441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.644478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.644674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.644704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.644820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.644851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.644965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.644998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.645101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.645131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 
00:27:17.948 [2024-11-20 15:36:21.645237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.645268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.645436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.645476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.645584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.645615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.645784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.645814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.645925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.645968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 
00:27:17.948 [2024-11-20 15:36:21.646121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.646152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.646263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.646294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.948 [2024-11-20 15:36:21.646490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.948 [2024-11-20 15:36:21.646522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.948 qpair failed and we were unable to recover it. 00:27:17.949 [2024-11-20 15:36:21.646641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.949 [2024-11-20 15:36:21.646672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.949 qpair failed and we were unable to recover it. 00:27:17.949 [2024-11-20 15:36:21.646881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.949 [2024-11-20 15:36:21.646912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.949 qpair failed and we were unable to recover it. 
00:27:17.949 [2024-11-20 15:36:21.647098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.949 [2024-11-20 15:36:21.647130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.949 qpair failed and we were unable to recover it.
[... the three-line failure above repeats ~115 times between 15:36:21.647098 and 15:36:21.668982, always connect() errno = 111 against addr=10.0.0.2, port=4420; the failing tqpair cycles through 0x7fdeec000b90, 0x1841ba0, and 0x7fdef8000b90 ...]
00:27:17.950 [2024-11-20 15:36:21.668936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.950 [2024-11-20 15:36:21.668982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.950 qpair failed and we were unable to recover it.
00:27:17.950 [2024-11-20 15:36:21.669100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.950 [2024-11-20 15:36:21.669131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.950 qpair failed and we were unable to recover it. 00:27:17.950 [2024-11-20 15:36:21.669238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.950 [2024-11-20 15:36:21.669269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.950 qpair failed and we were unable to recover it. 00:27:17.950 [2024-11-20 15:36:21.669458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.950 [2024-11-20 15:36:21.669489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.950 qpair failed and we were unable to recover it. 00:27:17.950 [2024-11-20 15:36:21.669659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.950 [2024-11-20 15:36:21.669690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.950 qpair failed and we were unable to recover it. 00:27:17.950 [2024-11-20 15:36:21.669924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.950 [2024-11-20 15:36:21.669966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.950 qpair failed and we were unable to recover it. 
00:27:17.950 [2024-11-20 15:36:21.670156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.670198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.670436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.670467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.670588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.670619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.670807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.670838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.670943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.670986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 
00:27:17.951 [2024-11-20 15:36:21.671161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.671192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.671309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.671340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.671465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.671495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.671617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.671647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.671856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.671887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 
00:27:17.951 [2024-11-20 15:36:21.672071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.672102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.672222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.672252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.672430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.672461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.672586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.672616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.672767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.672798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 
00:27:17.951 [2024-11-20 15:36:21.672926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.672964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.673152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.673183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.673313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.673342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.673466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.673496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.673602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.673632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 
00:27:17.951 [2024-11-20 15:36:21.673925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.673967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.674166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.674196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.674321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.674353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.674614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.674644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.674830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.674861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 
00:27:17.951 [2024-11-20 15:36:21.674989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.675021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.675219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.675248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.675419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.675491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.675649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.675683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.675872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.675904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 
00:27:17.951 [2024-11-20 15:36:21.676038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.676074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.676203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.676235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.676346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.676377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.676582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.676613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.676722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.676753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 
00:27:17.951 [2024-11-20 15:36:21.676932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.676978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.677111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.677142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.677258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.677288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.677459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.677490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.677681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.677712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 
00:27:17.951 [2024-11-20 15:36:21.677838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.677880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.678023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.678056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.678190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.678222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.678401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.678432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.678604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.678635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 
00:27:17.951 [2024-11-20 15:36:21.678842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.678872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.678983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.679015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.679139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.679170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.679296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.679326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.679499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.679530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 
00:27:17.951 [2024-11-20 15:36:21.679632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.679662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.679773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.679803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.679927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.679968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.680083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.680115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.680231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.680262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 
00:27:17.951 [2024-11-20 15:36:21.680395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.680426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.680539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.680570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.680673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.680703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.680912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.680943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.681194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.681226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 
00:27:17.951 [2024-11-20 15:36:21.681397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.681427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.681541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.681571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.681696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.681727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.951 qpair failed and we were unable to recover it. 00:27:17.951 [2024-11-20 15:36:21.681836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.951 [2024-11-20 15:36:21.681866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.952 qpair failed and we were unable to recover it. 00:27:17.952 [2024-11-20 15:36:21.681981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.952 [2024-11-20 15:36:21.682014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.952 qpair failed and we were unable to recover it. 
00:27:17.952 [2024-11-20 15:36:21.682133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.952 [2024-11-20 15:36:21.682165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.952 qpair failed and we were unable to recover it. 00:27:17.952 [2024-11-20 15:36:21.682342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.952 [2024-11-20 15:36:21.682373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.952 qpair failed and we were unable to recover it. 00:27:17.952 [2024-11-20 15:36:21.682525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.952 [2024-11-20 15:36:21.682596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.952 qpair failed and we were unable to recover it. 00:27:17.952 [2024-11-20 15:36:21.682718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.952 [2024-11-20 15:36:21.682754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.952 qpair failed and we were unable to recover it. 00:27:17.952 [2024-11-20 15:36:21.682931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.952 [2024-11-20 15:36:21.682977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.952 qpair failed and we were unable to recover it. 
00:27:17.952 [2024-11-20 15:36:21.683156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.952 [2024-11-20 15:36:21.683186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.952 qpair failed and we were unable to recover it.
00:27:17.952 (last three messages repeated for tqpair=0x7fdef8000b90 with timestamps from [2024-11-20 15:36:21.683372] through [2024-11-20 15:36:21.689223])
00:27:17.952 [2024-11-20 15:36:21.689394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.952 [2024-11-20 15:36:21.689424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.952 qpair failed and we were unable to recover it.
00:27:17.952 [2024-11-20 15:36:21.689606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.952 [2024-11-20 15:36:21.689637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.952 qpair failed and we were unable to recover it.
00:27:17.952 [2024-11-20 15:36:21.689859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.952 [2024-11-20 15:36:21.689901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:17.952 qpair failed and we were unable to recover it.
00:27:17.952 [2024-11-20 15:36:21.690075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.952 [2024-11-20 15:36:21.690147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:17.952 qpair failed and we were unable to recover it.
00:27:17.952 [2024-11-20 15:36:21.690364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.952 [2024-11-20 15:36:21.690402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:17.952 qpair failed and we were unable to recover it.
00:27:17.953 (last three messages repeated for tqpair=0x7fdeec000b90 with timestamps from [2024-11-20 15:36:21.690584] through [2024-11-20 15:36:21.704395])
00:27:17.953 [2024-11-20 15:36:21.704583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.953 [2024-11-20 15:36:21.704614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.704743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.704775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.704874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.704910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.705048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.705081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.705264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.705295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 
00:27:17.954 [2024-11-20 15:36:21.705400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.705432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.705555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.705586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.705755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.705786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.705890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.705921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.706088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.706159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 
00:27:17.954 [2024-11-20 15:36:21.706314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.706351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.706481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.706511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.706632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.706664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.706841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.706871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.707005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.707038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 
00:27:17.954 [2024-11-20 15:36:21.707161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.707192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.707392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.707423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.707610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.707643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.707847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.707877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.708049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.708082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 
00:27:17.954 [2024-11-20 15:36:21.708209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.708241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.708356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.708386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.708502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.708532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.708713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.708743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.708860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.708891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 
00:27:17.954 [2024-11-20 15:36:21.709089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.709121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.709246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.709278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.709404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.709435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.709554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.709584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.709701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.709732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 
00:27:17.954 [2024-11-20 15:36:21.709917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.709958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.710131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.710161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.710345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.710375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.710563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.710594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.710708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.710738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 
00:27:17.954 [2024-11-20 15:36:21.710865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.710895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.711100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.711132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.711248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.711279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.711452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.711483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.711661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.711692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 
00:27:17.954 [2024-11-20 15:36:21.711809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.711840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.711962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.711995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.712118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.712155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.712262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.712291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.712417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.712447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 
00:27:17.954 [2024-11-20 15:36:21.712648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.712679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.712782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.712812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.712912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.712942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.713207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.713237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.713372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.713403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 
00:27:17.954 [2024-11-20 15:36:21.713640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.713670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.713797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.713827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.714010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.714043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.714217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.714247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.714371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.714402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 
00:27:17.954 [2024-11-20 15:36:21.714511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.714540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.714805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.714837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.714945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.715072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.715249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.715279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.954 qpair failed and we were unable to recover it. 00:27:17.954 [2024-11-20 15:36:21.715399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.954 [2024-11-20 15:36:21.715431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.955 qpair failed and we were unable to recover it. 
00:27:17.955 [2024-11-20 15:36:21.715636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.955 [2024-11-20 15:36:21.715666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.955 qpair failed and we were unable to recover it. 00:27:17.955 [2024-11-20 15:36:21.715785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.955 [2024-11-20 15:36:21.715816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.955 qpair failed and we were unable to recover it. 00:27:17.955 [2024-11-20 15:36:21.715928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.955 [2024-11-20 15:36:21.715967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.955 qpair failed and we were unable to recover it. 00:27:17.955 [2024-11-20 15:36:21.716071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.955 [2024-11-20 15:36:21.716102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.955 qpair failed and we were unable to recover it. 00:27:17.955 [2024-11-20 15:36:21.716271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.955 [2024-11-20 15:36:21.716301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.955 qpair failed and we were unable to recover it. 
00:27:17.955 [2024-11-20 15:36:21.716483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.955 [2024-11-20 15:36:21.716514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.955 qpair failed and we were unable to recover it. 00:27:17.955 [2024-11-20 15:36:21.716629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.955 [2024-11-20 15:36:21.716659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.955 qpair failed and we were unable to recover it. 00:27:17.955 [2024-11-20 15:36:21.716841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.955 [2024-11-20 15:36:21.716872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.955 qpair failed and we were unable to recover it. 00:27:17.955 [2024-11-20 15:36:21.716985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.955 [2024-11-20 15:36:21.717017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.955 qpair failed and we were unable to recover it. 00:27:17.955 [2024-11-20 15:36:21.717219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.955 [2024-11-20 15:36:21.717249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.955 qpair failed and we were unable to recover it. 
00:27:17.955 [2024-11-20 15:36:21.717444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.955 [2024-11-20 15:36:21.717475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.955 qpair failed and we were unable to recover it. 00:27:17.955 [2024-11-20 15:36:21.717590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.955 [2024-11-20 15:36:21.717620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.955 qpair failed and we were unable to recover it. 00:27:17.955 [2024-11-20 15:36:21.717804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.955 [2024-11-20 15:36:21.717834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.955 qpair failed and we were unable to recover it. 00:27:17.955 [2024-11-20 15:36:21.718007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.955 [2024-11-20 15:36:21.718039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.955 qpair failed and we were unable to recover it. 00:27:17.955 [2024-11-20 15:36:21.718210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.955 [2024-11-20 15:36:21.718241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.955 qpair failed and we were unable to recover it. 
00:27:17.955 [2024-11-20 15:36:21.718355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.955 [2024-11-20 15:36:21.718385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.955 qpair failed and we were unable to recover it.
00:27:17.955 [... the same three-line sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 15:36:21.718355 through 15:36:21.739932; repeated entries elided ...]
00:27:17.957 [2024-11-20 15:36:21.739901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.957 [2024-11-20 15:36:21.739932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:17.957 qpair failed and we were unable to recover it.
00:27:17.957 [2024-11-20 15:36:21.740151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.740182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.740300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.740331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.740453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.740484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.740669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.740699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.740876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.740906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 
00:27:17.957 [2024-11-20 15:36:21.741094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.741126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.741243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.741272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.741451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.741481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.741654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.741685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.741804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.741834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 
00:27:17.957 [2024-11-20 15:36:21.742019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.742054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.742235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.742267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.742372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.742409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.742516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.742546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.742783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.742824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 
00:27:17.957 [2024-11-20 15:36:21.743003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.743035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.743143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.743174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.743272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.743302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.743422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.743453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.743645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.743676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 
00:27:17.957 [2024-11-20 15:36:21.743796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.743826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.744014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.744045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.744152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.744183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.744396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.744427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.744555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.744586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 
00:27:17.957 [2024-11-20 15:36:21.744708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.744738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.744863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.744893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.745162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.745193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.745364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.745394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.745507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.745538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 
00:27:17.957 [2024-11-20 15:36:21.745649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.745680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.745787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.745818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.745934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.745974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.746149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.746179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.746384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.746416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 
00:27:17.957 [2024-11-20 15:36:21.746625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.746656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.746827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.746858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.747040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.747072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.747192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.747222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.747449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.747520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 
00:27:17.957 [2024-11-20 15:36:21.747741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.747777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.747905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.747936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.748139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.748170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.748280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.748310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.748502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.748532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 
00:27:17.957 [2024-11-20 15:36:21.748643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.748675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.748779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.748810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.748934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.748978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.749118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.749148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.749386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.749418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 
00:27:17.957 [2024-11-20 15:36:21.749588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.749618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.749742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.749772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.749883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.749922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.750058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.750091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.750206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.750236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 
00:27:17.957 [2024-11-20 15:36:21.750353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.957 [2024-11-20 15:36:21.750385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.957 qpair failed and we were unable to recover it. 00:27:17.957 [2024-11-20 15:36:21.750554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.750586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 00:27:17.958 [2024-11-20 15:36:21.750766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.750798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 00:27:17.958 [2024-11-20 15:36:21.750915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.750945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 00:27:17.958 [2024-11-20 15:36:21.751159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.751191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 
00:27:17.958 [2024-11-20 15:36:21.751313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.751343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 00:27:17.958 [2024-11-20 15:36:21.751461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.751491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 00:27:17.958 [2024-11-20 15:36:21.751665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.751701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 00:27:17.958 [2024-11-20 15:36:21.751805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.751836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 00:27:17.958 [2024-11-20 15:36:21.751969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.752001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 
00:27:17.958 [2024-11-20 15:36:21.752109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.752140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 00:27:17.958 [2024-11-20 15:36:21.752253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.752283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 00:27:17.958 [2024-11-20 15:36:21.752532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.752564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 00:27:17.958 [2024-11-20 15:36:21.752802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.752833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 00:27:17.958 [2024-11-20 15:36:21.753010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.753045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 
00:27:17.958 [2024-11-20 15:36:21.753161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.753193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 00:27:17.958 [2024-11-20 15:36:21.753363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.753392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 00:27:17.958 [2024-11-20 15:36:21.753572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.753602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 00:27:17.958 [2024-11-20 15:36:21.753709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.753741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 00:27:17.958 [2024-11-20 15:36:21.753864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.753894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 
00:27:17.958 [2024-11-20 15:36:21.754119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.754151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 00:27:17.958 [2024-11-20 15:36:21.754271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.754301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 00:27:17.958 [2024-11-20 15:36:21.754414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.754445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 00:27:17.958 [2024-11-20 15:36:21.754619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.754649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 00:27:17.958 [2024-11-20 15:36:21.754832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.958 [2024-11-20 15:36:21.754903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.958 qpair failed and we were unable to recover it. 
00:27:17.959 [... identical posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." records repeat continuously from 15:36:21.755058 through 15:36:21.776119 ...]
00:27:17.960 [2024-11-20 15:36:21.776240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.776270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.776391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.776422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.776523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.776553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.776748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.776779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.776959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.776992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 
00:27:17.960 [2024-11-20 15:36:21.777162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.777193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.777454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.777485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.777661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.777692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.777804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.777835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.778024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.778057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 
00:27:17.960 [2024-11-20 15:36:21.778162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.778193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.778457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.778528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.778672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.778707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.778975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.779010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.779156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.779188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 
00:27:17.960 [2024-11-20 15:36:21.779379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.779410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.779526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.779557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.779733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.779764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.779876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.779906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.780028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.780061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 
00:27:17.960 [2024-11-20 15:36:21.780253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.780284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.780452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.780483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.780666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.780697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.780827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.780858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.781036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.781080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 
00:27:17.960 [2024-11-20 15:36:21.781188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.781219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.781345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.781376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.781482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.781512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.781618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.781649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.781752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.781784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 
00:27:17.960 [2024-11-20 15:36:21.781963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.781996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.782113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.782144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.782269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.782299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.782481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.782513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.782702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.782732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 
00:27:17.960 [2024-11-20 15:36:21.782903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.782934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.783065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.783096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.783212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.783243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.783444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.783475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.783592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.783622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 
00:27:17.960 [2024-11-20 15:36:21.783813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.783843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.784019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.784050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.784221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.784251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.784457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.784488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.784674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.784705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 
00:27:17.960 [2024-11-20 15:36:21.784897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.784929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.785108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.785140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.785327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.785359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.785478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.785508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.785769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.785801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 
00:27:17.960 [2024-11-20 15:36:21.785970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.786002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.786244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.786276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.786406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.786437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.786565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.786596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 00:27:17.960 [2024-11-20 15:36:21.786833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.960 [2024-11-20 15:36:21.786865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.960 qpair failed and we were unable to recover it. 
00:27:17.961 [2024-11-20 15:36:21.787037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.787069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it. 00:27:17.961 [2024-11-20 15:36:21.787260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.787290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it. 00:27:17.961 [2024-11-20 15:36:21.787553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.787583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it. 00:27:17.961 [2024-11-20 15:36:21.787696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.787728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it. 00:27:17.961 [2024-11-20 15:36:21.787832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.787863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it. 
00:27:17.961 [2024-11-20 15:36:21.788037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.788069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it. 00:27:17.961 [2024-11-20 15:36:21.788256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.788287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it. 00:27:17.961 [2024-11-20 15:36:21.788552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.788582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it. 00:27:17.961 [2024-11-20 15:36:21.788766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.788796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it. 00:27:17.961 [2024-11-20 15:36:21.788983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.789021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it. 
00:27:17.961 [2024-11-20 15:36:21.789199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.789230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it. 00:27:17.961 [2024-11-20 15:36:21.789494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.789525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it. 00:27:17.961 [2024-11-20 15:36:21.789647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.789678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it. 00:27:17.961 [2024-11-20 15:36:21.789848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.789879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it. 00:27:17.961 [2024-11-20 15:36:21.790139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.790173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it. 
00:27:17.961 [2024-11-20 15:36:21.790298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.790329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it. 00:27:17.961 [2024-11-20 15:36:21.790498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.790528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it. 00:27:17.961 [2024-11-20 15:36:21.790717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.790748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it. 00:27:17.961 [2024-11-20 15:36:21.790936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.790978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it. 00:27:17.961 [2024-11-20 15:36:21.791219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.791250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it. 
00:27:17.961 [2024-11-20 15:36:21.791438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.961 [2024-11-20 15:36:21.791468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:17.961 qpair failed and we were unable to recover it.
00:27:18.250 [2024-11-20 15:36:21.816417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.816449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.816711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.816741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.816932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.816970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.817160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.817192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.817392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.817423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 
00:27:18.250 [2024-11-20 15:36:21.817685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.817716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.817835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.817864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.817996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.818027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.818210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.818241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.818481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.818512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 
00:27:18.250 [2024-11-20 15:36:21.818711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.818742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.818931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.818985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.819102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.819132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.819369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.819399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.819582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.819614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 
00:27:18.250 [2024-11-20 15:36:21.819851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.819881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.820067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.820098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.820302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.820333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.820514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.820546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.820735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.820766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 
00:27:18.250 [2024-11-20 15:36:21.820880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.820909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.821173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.821210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.821327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.821358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.821549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.821579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.821770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.821801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 
00:27:18.250 [2024-11-20 15:36:21.821986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.822019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.822149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.822178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.822289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.822320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.822521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.822552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.822742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.822773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 
00:27:18.250 [2024-11-20 15:36:21.823016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.823049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.823251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.823282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.823407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.823436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.823552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.823584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.823755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.823786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 
00:27:18.250 [2024-11-20 15:36:21.823977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.824008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.824189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.824220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.824401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.824431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.824602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.824632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.824745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.824775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 
00:27:18.250 [2024-11-20 15:36:21.824962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.824993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.825230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.250 [2024-11-20 15:36:21.825261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.250 qpair failed and we were unable to recover it. 00:27:18.250 [2024-11-20 15:36:21.825451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.825482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.825589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.825618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.825745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.825775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 
00:27:18.251 [2024-11-20 15:36:21.825971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.826002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.826212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.826243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.826419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.826451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.826573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.826605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.826793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.826823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 
00:27:18.251 [2024-11-20 15:36:21.826991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.827022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.827208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.827239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.827360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.827391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.827560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.827589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.827775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.827806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 
00:27:18.251 [2024-11-20 15:36:21.828046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.828079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.828203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.828233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.828410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.828440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.828610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.828640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.828816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.828846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 
00:27:18.251 [2024-11-20 15:36:21.829084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.829117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.829298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.829335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.829521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.829551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.829662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.829691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.829874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.829904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 
00:27:18.251 [2024-11-20 15:36:21.830059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.830094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.830264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.830296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.830423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.830453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.830711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.830742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.831019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.831051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 
00:27:18.251 [2024-11-20 15:36:21.831236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.831268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.831433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.831463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.831652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.831682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.831873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.831903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 00:27:18.251 [2024-11-20 15:36:21.832150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.832182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it. 
00:27:18.251 [2024-11-20 15:36:21.832306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.251 [2024-11-20 15:36:21.832337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.251 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / "qpair failed and we were unable to recover it." pair repeats continuously: for tqpair=0x7fdef8000b90 from 15:36:21.832 to 15:36:21.842, then for tqpair=0x7fdeec000b90 from 15:36:21.842 onward ...]
00:27:18.253 [2024-11-20 15:36:21.856526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.856558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it.
00:27:18.253 [2024-11-20 15:36:21.856731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.856762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 00:27:18.253 [2024-11-20 15:36:21.856933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.856972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 00:27:18.253 [2024-11-20 15:36:21.857231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.857263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 00:27:18.253 [2024-11-20 15:36:21.857505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.857535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 00:27:18.253 [2024-11-20 15:36:21.857706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.857737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 
00:27:18.253 [2024-11-20 15:36:21.857914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.857944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 00:27:18.253 [2024-11-20 15:36:21.858084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.858114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 00:27:18.253 [2024-11-20 15:36:21.858364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.858395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 00:27:18.253 [2024-11-20 15:36:21.858514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.858545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 00:27:18.253 [2024-11-20 15:36:21.858730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.858760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 
00:27:18.253 [2024-11-20 15:36:21.858936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.858976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 00:27:18.253 [2024-11-20 15:36:21.859187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.859218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 00:27:18.253 [2024-11-20 15:36:21.859454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.859485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 00:27:18.253 [2024-11-20 15:36:21.859665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.859695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 00:27:18.253 [2024-11-20 15:36:21.859877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.859908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 
00:27:18.253 [2024-11-20 15:36:21.860155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.860188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 00:27:18.253 [2024-11-20 15:36:21.860320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.860351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 00:27:18.253 [2024-11-20 15:36:21.860547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.860578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 00:27:18.253 [2024-11-20 15:36:21.860849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.860918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 00:27:18.253 [2024-11-20 15:36:21.861129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.861200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 
00:27:18.253 [2024-11-20 15:36:21.861429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.861465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 00:27:18.253 [2024-11-20 15:36:21.861581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.861613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 00:27:18.253 [2024-11-20 15:36:21.861850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.253 [2024-11-20 15:36:21.861882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.253 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.862084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.862117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.862305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.862335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 
00:27:18.254 [2024-11-20 15:36:21.862595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.862627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.862807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.862837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.862967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.863000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.863170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.863200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.863313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.863342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 
00:27:18.254 [2024-11-20 15:36:21.863523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.863555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.863664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.863704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.863807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.863839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.864043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.864075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.864291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.864322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 
00:27:18.254 [2024-11-20 15:36:21.864559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.864590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.864794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.864824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.865005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.865037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.865293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.865325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.865451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.865481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 
00:27:18.254 [2024-11-20 15:36:21.865597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.865628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.865805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.865838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.865966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.865998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.866106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.866138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.866308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.866339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 
00:27:18.254 [2024-11-20 15:36:21.866472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.866503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.866635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.866666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.866900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.866931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.867119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.867151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.867329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.867359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 
00:27:18.254 [2024-11-20 15:36:21.867457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.867489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.867632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.867663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.867860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.867891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.868139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.868172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.868369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.868399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 
00:27:18.254 [2024-11-20 15:36:21.868654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.868685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.868885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.868915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.869206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.869238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.869517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.869559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.869813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.869845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 
00:27:18.254 [2024-11-20 15:36:21.870030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.870063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.870182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.870216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.870338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.870369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.870496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.870528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.870723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.870754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 
00:27:18.254 [2024-11-20 15:36:21.870861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.870893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.871008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.871041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.871155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.871187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.871357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.871388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.871572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.871603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 
00:27:18.254 [2024-11-20 15:36:21.871773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.871804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.871939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.871981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.872252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.872284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.872519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.872550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.872789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.872821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 
00:27:18.254 [2024-11-20 15:36:21.873028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.873062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.873195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.873227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.873357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.873389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.873515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.873547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.873783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.873814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 
00:27:18.254 [2024-11-20 15:36:21.874008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.874041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.254 qpair failed and we were unable to recover it. 00:27:18.254 [2024-11-20 15:36:21.874161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.254 [2024-11-20 15:36:21.874193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.874378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.874409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.874634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.874666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.874931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.874972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 
00:27:18.255 [2024-11-20 15:36:21.875098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.875136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.875253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.875285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.875496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.875528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.875699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.875729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.875985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.876018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 
00:27:18.255 [2024-11-20 15:36:21.876303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.876334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.876617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.876649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.876775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.876807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.877011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.877043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.877282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.877313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 
00:27:18.255 [2024-11-20 15:36:21.877578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.877610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.877811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.877842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.878406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.878445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.878639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.878674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.878874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.878905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 
00:27:18.255 [2024-11-20 15:36:21.879173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.879206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.879466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.879497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.879622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.879652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.879889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.879920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.880114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.880147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 
00:27:18.255 [2024-11-20 15:36:21.880357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.880388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.880489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.880520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.880693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.880723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.880909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.880941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.881131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.881163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 
00:27:18.255 [2024-11-20 15:36:21.881347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.881378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.881499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.881530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.881639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.881676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.881882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.881913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.882115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.882148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 
00:27:18.255 [2024-11-20 15:36:21.882326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.882357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.882566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.882596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.882784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.882815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.882985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.883017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.883132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.883163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 
00:27:18.255 [2024-11-20 15:36:21.883350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.883382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.883563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.883595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.883768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.883799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.884059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.884091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.884268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.884300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 
00:27:18.255 [2024-11-20 15:36:21.884490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.884521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.884716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.884748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.884936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.884977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.885146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.885178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.885356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.885386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 
00:27:18.255 [2024-11-20 15:36:21.885625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.885656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.885787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.885817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.885920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.885960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.886199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.886231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.886362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.886393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 
00:27:18.255 [2024-11-20 15:36:21.886566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.886596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.886714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.255 [2024-11-20 15:36:21.886745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.255 qpair failed and we were unable to recover it. 00:27:18.255 [2024-11-20 15:36:21.886992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.887025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.887213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.887243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.887355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.887392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 
00:27:18.256 [2024-11-20 15:36:21.887630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.887661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.887773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.887804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.887934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.887974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.888233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.888265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.888393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.888424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 
00:27:18.256 [2024-11-20 15:36:21.888605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.888635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.888760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.888790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.888893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.888924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.889136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.889168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.889424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.889455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 
00:27:18.256 [2024-11-20 15:36:21.889559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.889589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.889760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.889790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.889910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.889941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.890060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.890092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.890213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.890244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 
00:27:18.256 [2024-11-20 15:36:21.890478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.890508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.890629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.890660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.890842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.890873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.891052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.891085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.891256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.891287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 
00:27:18.256 [2024-11-20 15:36:21.891456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.891487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.891674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.891705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.891883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.891913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.892161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.892193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.892314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.892345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 
00:27:18.256 [2024-11-20 15:36:21.892460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.892491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.892694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.892725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.892907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.892939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.893177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.893208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 00:27:18.256 [2024-11-20 15:36:21.893467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.256 [2024-11-20 15:36:21.893499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.256 qpair failed and we were unable to recover it. 
00:27:18.256 [2024-11-20 15:36:21.893738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.256 [2024-11-20 15:36:21.893770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.256 qpair failed and we were unable to recover it.
00:27:18.256 [same connect()/qpair-failure triple repeated 9 more times for tqpair=0x1841ba0, 15:36:21.893892 through 15:36:21.895658]
00:27:18.256 [2024-11-20 15:36:21.895886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.256 [2024-11-20 15:36:21.895974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.256 qpair failed and we were unable to recover it.
00:27:18.257 [same connect()/qpair-failure triple repeated 39 more times for tqpair=0x7fdeec000b90, 15:36:21.896119 through 15:36:21.904192]
00:27:18.257 [2024-11-20 15:36:21.904434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.257 [2024-11-20 15:36:21.904505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.257 qpair failed and we were unable to recover it.
00:27:18.257 [same connect()/qpair-failure triple repeated 2 more times for tqpair=0x7fdef0000b90, 15:36:21.904744 through 15:36:21.904921]
00:27:18.257 [2024-11-20 15:36:21.905130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.257 [2024-11-20 15:36:21.905164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.257 qpair failed and we were unable to recover it.
00:27:18.258 [same connect()/qpair-failure triple repeated 61 more times for tqpair=0x7fdeec000b90, 15:36:21.905353 through 15:36:21.918894]
00:27:18.258 [2024-11-20 15:36:21.919138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.919171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 00:27:18.258 [2024-11-20 15:36:21.919448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.919479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 00:27:18.258 [2024-11-20 15:36:21.919690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.919721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 00:27:18.258 [2024-11-20 15:36:21.919855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.919886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 00:27:18.258 [2024-11-20 15:36:21.920065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.920097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 
00:27:18.258 [2024-11-20 15:36:21.920198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.920227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 00:27:18.258 [2024-11-20 15:36:21.920347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.920378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 00:27:18.258 [2024-11-20 15:36:21.920639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.920669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 00:27:18.258 [2024-11-20 15:36:21.920790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.920821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 00:27:18.258 [2024-11-20 15:36:21.921063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.921094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 
00:27:18.258 [2024-11-20 15:36:21.921274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.921304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 00:27:18.258 [2024-11-20 15:36:21.921559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.921589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 00:27:18.258 [2024-11-20 15:36:21.921773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.921803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 00:27:18.258 [2024-11-20 15:36:21.921930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.921973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 00:27:18.258 [2024-11-20 15:36:21.922140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.922171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 
00:27:18.258 [2024-11-20 15:36:21.922357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.922393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 00:27:18.258 [2024-11-20 15:36:21.922600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.922629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 00:27:18.258 [2024-11-20 15:36:21.922816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.922847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 00:27:18.258 [2024-11-20 15:36:21.923086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.923118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 00:27:18.258 [2024-11-20 15:36:21.923230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.923260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 
00:27:18.258 [2024-11-20 15:36:21.923374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.923404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 00:27:18.258 [2024-11-20 15:36:21.923576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.923606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 00:27:18.258 [2024-11-20 15:36:21.923791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.923822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 00:27:18.258 [2024-11-20 15:36:21.924083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.924116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 00:27:18.258 [2024-11-20 15:36:21.924245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.258 [2024-11-20 15:36:21.924276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.258 qpair failed and we were unable to recover it. 
00:27:18.258 [2024-11-20 15:36:21.924399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.924428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.924617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.924646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.924885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.924916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.925141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.925172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.925361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.925392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 
00:27:18.259 [2024-11-20 15:36:21.925633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.925664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.925834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.925863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.925981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.926012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.926207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.926236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.926432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.926463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 
00:27:18.259 [2024-11-20 15:36:21.926726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.926756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.926890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.926921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.927117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.927148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.927316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.927347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.927514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.927544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 
00:27:18.259 [2024-11-20 15:36:21.927710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.927740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.928017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.928049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.928184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.928217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.928339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.928369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.928495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.928525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 
00:27:18.259 [2024-11-20 15:36:21.928738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.928769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.929053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.929084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.929204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.929235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.929491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.929522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.929622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.929651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 
00:27:18.259 [2024-11-20 15:36:21.929777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.929806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.930042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.930075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.930247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.930278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.930457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.930488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.930673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.930703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 
00:27:18.259 [2024-11-20 15:36:21.930994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.931032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.931280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.931311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.931504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.931534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.931671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.931699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.931831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.931859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 
00:27:18.259 [2024-11-20 15:36:21.931976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.932010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.932275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.932307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.932474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.932504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.932607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.932636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.932768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.932799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 
00:27:18.259 [2024-11-20 15:36:21.933003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.933036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.933312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.933342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.933542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.933573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.933741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.933771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.933971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.934004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 
00:27:18.259 [2024-11-20 15:36:21.934213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.934247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.934501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.934533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.934722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.934754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.934944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.934985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 00:27:18.259 [2024-11-20 15:36:21.935190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.935220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 
00:27:18.259 [2024-11-20 15:36:21.935461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.259 [2024-11-20 15:36:21.935492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.259 qpair failed and we were unable to recover it. 
[... the same three-line sequence — posix.c:1054:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fdeec000b90 (addr=10.0.0.2, port=4420), "qpair failed and we were unable to recover it." — repeats ~114 more times between 15:36:21.935661 and 15:36:21.960298 ...]
00:27:18.261 [2024-11-20 15:36:21.960595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.261 [2024-11-20 15:36:21.960626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.261 qpair failed and we were unable to recover it. 00:27:18.261 [2024-11-20 15:36:21.960892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.261 [2024-11-20 15:36:21.960923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.261 qpair failed and we were unable to recover it. 00:27:18.261 [2024-11-20 15:36:21.961140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.261 [2024-11-20 15:36:21.961171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.261 qpair failed and we were unable to recover it. 00:27:18.261 [2024-11-20 15:36:21.961285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.261 [2024-11-20 15:36:21.961316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.261 qpair failed and we were unable to recover it. 00:27:18.261 [2024-11-20 15:36:21.961435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.261 [2024-11-20 15:36:21.961465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.261 qpair failed and we were unable to recover it. 
00:27:18.261 [2024-11-20 15:36:21.961704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.261 [2024-11-20 15:36:21.961734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.261 qpair failed and we were unable to recover it. 00:27:18.261 [2024-11-20 15:36:21.961922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.261 [2024-11-20 15:36:21.961963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.261 qpair failed and we were unable to recover it. 00:27:18.261 [2024-11-20 15:36:21.962226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.261 [2024-11-20 15:36:21.962258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.261 qpair failed and we were unable to recover it. 00:27:18.261 [2024-11-20 15:36:21.962446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.261 [2024-11-20 15:36:21.962478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.261 qpair failed and we were unable to recover it. 00:27:18.261 [2024-11-20 15:36:21.962663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.261 [2024-11-20 15:36:21.962695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.261 qpair failed and we were unable to recover it. 
00:27:18.261 [2024-11-20 15:36:21.962811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.261 [2024-11-20 15:36:21.962841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.261 qpair failed and we were unable to recover it. 00:27:18.261 [2024-11-20 15:36:21.963100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.963132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.963327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.963357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.963552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.963582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.963771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.963802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 
00:27:18.262 [2024-11-20 15:36:21.963938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.963978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.964166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.964196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.964455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.964486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.964745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.964776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.964915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.964945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 
00:27:18.262 [2024-11-20 15:36:21.965081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.965111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.965390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.965420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.965603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.965633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.965748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.965778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.965903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.965933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 
00:27:18.262 [2024-11-20 15:36:21.966070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.966108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.966288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.966319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.966487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.966517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.966638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.966669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.966924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.966963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 
00:27:18.262 [2024-11-20 15:36:21.967080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.967111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.967283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.967315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.967528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.967559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.967796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.967827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.968010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.968043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 
00:27:18.262 [2024-11-20 15:36:21.968171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.968202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.968389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.968418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.968537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.968569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.968688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.968719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.968838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.968869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 
00:27:18.262 [2024-11-20 15:36:21.969079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.969112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.969293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.969325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.969455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.969489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.969661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.969690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.969869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.969901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 
00:27:18.262 [2024-11-20 15:36:21.970094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.970126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.970312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.970342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.970529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.970559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.970663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.970694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.970801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.970832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 
00:27:18.262 [2024-11-20 15:36:21.971007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.971038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.971275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.971307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.971577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.971609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.971713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.971743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.971984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.972017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 
00:27:18.262 [2024-11-20 15:36:21.972264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.972295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.972484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.972515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.972631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.972662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.972860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.972890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.973083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.973114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 
00:27:18.262 [2024-11-20 15:36:21.973295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.973326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.973589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.973621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.973727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.973758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.973928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.973969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.974140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.974171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 
00:27:18.262 [2024-11-20 15:36:21.974433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.974470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.974659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.974689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.974873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.974903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.262 qpair failed and we were unable to recover it. 00:27:18.262 [2024-11-20 15:36:21.975103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.262 [2024-11-20 15:36:21.975135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.975393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.975424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 
00:27:18.263 [2024-11-20 15:36:21.975553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.975584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.975765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.975796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.975980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.976011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.976214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.976245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.976375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.976405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 
00:27:18.263 [2024-11-20 15:36:21.976642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.976673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.976800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.976832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.976967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.976999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.977189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.977220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.977463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.977495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 
00:27:18.263 [2024-11-20 15:36:21.977730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.977761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.977998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.978029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.978202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.978233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.978346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.978377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.978496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.978526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 
00:27:18.263 [2024-11-20 15:36:21.978650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.978681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.978916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.978956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.979170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.979201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.979332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.979362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.979492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.979522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 
00:27:18.263 [2024-11-20 15:36:21.979735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.979767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.979939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.979981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.980176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.980208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.980326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.980356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.980463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.980494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 
00:27:18.263 [2024-11-20 15:36:21.980767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.980798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.981071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.981104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.981329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.981360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.981529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.981560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.981747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.981778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 
00:27:18.263 [2024-11-20 15:36:21.981975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.982008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.982135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.982166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.982353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.982383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.982490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.982521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.982706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.982737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 
00:27:18.263 [2024-11-20 15:36:21.982988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.983027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.983215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.983247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.983371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.983400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.983662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.983692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.983862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.983893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 
00:27:18.263 [2024-11-20 15:36:21.984008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.984037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.984248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.984280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.984405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.984437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.984616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.984647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.984859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.984890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 
00:27:18.263 [2024-11-20 15:36:21.985129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.985160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.985401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.985432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.985541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.985571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.985682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.985714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.985902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.985933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 
00:27:18.263 [2024-11-20 15:36:21.986128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.986158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.986343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.986375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.986545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.986576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.986702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.986732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.986918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.986959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 
00:27:18.263 [2024-11-20 15:36:21.987138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.987169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.987287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.263 [2024-11-20 15:36:21.987317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.263 qpair failed and we were unable to recover it. 00:27:18.263 [2024-11-20 15:36:21.987435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.987464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.987562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.987592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.987780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.987811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 
00:27:18.264 [2024-11-20 15:36:21.987912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.987942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.988192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.988223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.988354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.988386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.988577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.988607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.988747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.988777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 
00:27:18.264 [2024-11-20 15:36:21.988912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.988942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.989137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.989169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.989431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.989463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.989719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.989750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.989939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.989980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 
00:27:18.264 [2024-11-20 15:36:21.990167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.990199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.990406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.990438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.990675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.990706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.990945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.990987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.991114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.991145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 
00:27:18.264 [2024-11-20 15:36:21.991405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.991440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.991679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.991710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.991878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.991908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.992173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.992205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.992441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.992472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 
00:27:18.264 [2024-11-20 15:36:21.992726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.992756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.992931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.992983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.993172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.993203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.993374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.993405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.993597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.993626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 
00:27:18.264 [2024-11-20 15:36:21.993883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.993914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.994187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.994220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.994410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.994440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.994646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.994677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.994867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.994899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 
00:27:18.264 [2024-11-20 15:36:21.995046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.995078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.995256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.995286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.995530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.995559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.995670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.995700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.995810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.995840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 
00:27:18.264 [2024-11-20 15:36:21.995968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.996000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.996185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.996217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.996422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.996453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.996627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.996657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.996921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.996980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 
00:27:18.264 [2024-11-20 15:36:21.997220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.997252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.997435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.997465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.997597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.997626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.997810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.997840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 00:27:18.264 [2024-11-20 15:36:21.997978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.264 [2024-11-20 15:36:21.998011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.264 qpair failed and we were unable to recover it. 
00:27:18.265 [2024-11-20 15:36:22.006464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.265 [2024-11-20 15:36:22.006535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.265 qpair failed and we were unable to recover it.
00:27:18.266 [2024-11-20 15:36:22.022109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.266 [2024-11-20 15:36:22.022141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.266 qpair failed and we were unable to recover it. 00:27:18.266 [2024-11-20 15:36:22.022398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.266 [2024-11-20 15:36:22.022430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.266 qpair failed and we were unable to recover it. 00:27:18.266 [2024-11-20 15:36:22.022605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.266 [2024-11-20 15:36:22.022636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.266 qpair failed and we were unable to recover it. 00:27:18.266 [2024-11-20 15:36:22.022872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.266 [2024-11-20 15:36:22.022904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.266 qpair failed and we were unable to recover it. 00:27:18.266 [2024-11-20 15:36:22.023045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.266 [2024-11-20 15:36:22.023076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.266 qpair failed and we were unable to recover it. 
00:27:18.266 [2024-11-20 15:36:22.023196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.266 [2024-11-20 15:36:22.023227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.266 qpair failed and we were unable to recover it. 00:27:18.266 [2024-11-20 15:36:22.023414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.266 [2024-11-20 15:36:22.023444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.266 qpair failed and we were unable to recover it. 00:27:18.266 [2024-11-20 15:36:22.023704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.266 [2024-11-20 15:36:22.023735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.266 qpair failed and we were unable to recover it. 00:27:18.266 [2024-11-20 15:36:22.023849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.266 [2024-11-20 15:36:22.023880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.266 qpair failed and we were unable to recover it. 00:27:18.266 [2024-11-20 15:36:22.024069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.266 [2024-11-20 15:36:22.024104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.266 qpair failed and we were unable to recover it. 
00:27:18.266 [2024-11-20 15:36:22.024207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.266 [2024-11-20 15:36:22.024238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.266 qpair failed and we were unable to recover it. 00:27:18.266 [2024-11-20 15:36:22.024414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.266 [2024-11-20 15:36:22.024445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.266 qpair failed and we were unable to recover it. 00:27:18.266 [2024-11-20 15:36:22.024617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.266 [2024-11-20 15:36:22.024648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.266 qpair failed and we were unable to recover it. 00:27:18.266 [2024-11-20 15:36:22.024757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.266 [2024-11-20 15:36:22.024787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.266 qpair failed and we were unable to recover it. 00:27:18.266 [2024-11-20 15:36:22.024902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.266 [2024-11-20 15:36:22.024932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.266 qpair failed and we were unable to recover it. 
00:27:18.267 [2024-11-20 15:36:22.025075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.025107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.025211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.025242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.025363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.025393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.025583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.025613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.025750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.025780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 
00:27:18.267 [2024-11-20 15:36:22.025911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.025942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.026189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.026220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.026474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.026545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.026688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.026724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.026975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.027011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 
00:27:18.267 [2024-11-20 15:36:22.027185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.027217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.027408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.027439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.027698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.027729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.027970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.028003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.028185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.028216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 
00:27:18.267 [2024-11-20 15:36:22.028329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.028360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.028506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.028538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.028824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.028855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.029098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.029130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.029268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.029299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 
00:27:18.267 [2024-11-20 15:36:22.029473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.029513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.029640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.029670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.029936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.029979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.030152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.030182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.030417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.030448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 
00:27:18.267 [2024-11-20 15:36:22.030562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.030594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.030782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.030812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.030930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.030970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.031208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.031239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.031437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.031468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 
00:27:18.267 [2024-11-20 15:36:22.031648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.031681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.031921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.031963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.032081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.032111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.032237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.032269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.032403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.032435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 
00:27:18.267 [2024-11-20 15:36:22.032674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.032705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.032870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.032900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.033094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.033126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.033299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.033329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.033513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.033543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 
00:27:18.267 [2024-11-20 15:36:22.033726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.033757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.033993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.034026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.034151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.034182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.034442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.034474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.034660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.034691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 
00:27:18.267 [2024-11-20 15:36:22.034861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.034893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.035032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.035064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.035329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.035400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.035546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.035582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.035716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.035748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 
00:27:18.267 [2024-11-20 15:36:22.035942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.035985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.036193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.036222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.036390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.036421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.036661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.036692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 00:27:18.267 [2024-11-20 15:36:22.036811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.267 [2024-11-20 15:36:22.036843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.267 qpair failed and we were unable to recover it. 
00:27:18.268 [2024-11-20 15:36:22.036961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.268 [2024-11-20 15:36:22.036994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.268 qpair failed and we were unable to recover it. 00:27:18.268 [2024-11-20 15:36:22.037179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.268 [2024-11-20 15:36:22.037209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.268 qpair failed and we were unable to recover it. 00:27:18.268 [2024-11-20 15:36:22.037390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.268 [2024-11-20 15:36:22.037420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.268 qpair failed and we were unable to recover it. 00:27:18.268 [2024-11-20 15:36:22.037633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.268 [2024-11-20 15:36:22.037664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.268 qpair failed and we were unable to recover it. 00:27:18.268 [2024-11-20 15:36:22.037859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.268 [2024-11-20 15:36:22.037890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.268 qpair failed and we were unable to recover it. 
00:27:18.268 [2024-11-20 15:36:22.038076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.268 [2024-11-20 15:36:22.038117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.268 qpair failed and we were unable to recover it.
00:27:18.268 [... same connect()-failed / sock-connection-error / qpair-failed triplet repeated for tqpair=0x7fdeec000b90 from 15:36:22.038352 through 15:36:22.052244 ...]
00:27:18.269 [2024-11-20 15:36:22.052429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.269 [2024-11-20 15:36:22.052500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.269 qpair failed and we were unable to recover it.
00:27:18.269 [... same triplet repeated for tqpair=0x1841ba0 from 15:36:22.052783 through 15:36:22.063034 ...]
00:27:18.269 [2024-11-20 15:36:22.063135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.269 [2024-11-20 15:36:22.063165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.269 qpair failed and we were unable to recover it. 00:27:18.269 [2024-11-20 15:36:22.063295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.269 [2024-11-20 15:36:22.063326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.269 qpair failed and we were unable to recover it. 00:27:18.269 [2024-11-20 15:36:22.063496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.269 [2024-11-20 15:36:22.063527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.269 qpair failed and we were unable to recover it. 00:27:18.269 [2024-11-20 15:36:22.063760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.269 [2024-11-20 15:36:22.063791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.269 qpair failed and we were unable to recover it. 00:27:18.269 [2024-11-20 15:36:22.063966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.269 [2024-11-20 15:36:22.063998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.269 qpair failed and we were unable to recover it. 
00:27:18.269 [2024-11-20 15:36:22.064112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.269 [2024-11-20 15:36:22.064143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.269 qpair failed and we were unable to recover it. 00:27:18.269 [2024-11-20 15:36:22.064377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.269 [2024-11-20 15:36:22.064413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.269 qpair failed and we were unable to recover it. 00:27:18.269 [2024-11-20 15:36:22.064532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.269 [2024-11-20 15:36:22.064562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.269 qpair failed and we were unable to recover it. 00:27:18.269 [2024-11-20 15:36:22.064732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.269 [2024-11-20 15:36:22.064763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.269 qpair failed and we were unable to recover it. 00:27:18.269 [2024-11-20 15:36:22.064963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.269 [2024-11-20 15:36:22.064997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.269 qpair failed and we were unable to recover it. 
00:27:18.269 [2024-11-20 15:36:22.065129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.269 [2024-11-20 15:36:22.065160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.269 qpair failed and we were unable to recover it. 00:27:18.269 [2024-11-20 15:36:22.065272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.269 [2024-11-20 15:36:22.065303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.269 qpair failed and we were unable to recover it. 00:27:18.269 [2024-11-20 15:36:22.065512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.269 [2024-11-20 15:36:22.065543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.269 qpair failed and we were unable to recover it. 00:27:18.269 [2024-11-20 15:36:22.065784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.269 [2024-11-20 15:36:22.065816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.269 qpair failed and we were unable to recover it. 00:27:18.269 [2024-11-20 15:36:22.066078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.269 [2024-11-20 15:36:22.066110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.269 qpair failed and we were unable to recover it. 
00:27:18.270 [2024-11-20 15:36:22.066234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.066265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.066394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.066426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.066690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.066721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.066926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.066964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.067169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.067201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 
00:27:18.270 [2024-11-20 15:36:22.067386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.067418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.067626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.067656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.067832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.067863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.068029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.068062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.068251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.068284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 
00:27:18.270 [2024-11-20 15:36:22.068405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.068437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.068626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.068656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.068842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.068872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.069062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.069094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.069285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.069317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 
00:27:18.270 [2024-11-20 15:36:22.069445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.069476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.069709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.069740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.069919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.069974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.070092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.070124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.070245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.070278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 
00:27:18.270 [2024-11-20 15:36:22.070471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.070501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.070667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.070698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.070907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.070937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.071137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.071169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.071350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.071381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 
00:27:18.270 [2024-11-20 15:36:22.071554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.071586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.071841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.071873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.072049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.072082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.072186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.072217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.072336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.072368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 
00:27:18.270 [2024-11-20 15:36:22.072548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.072578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.072749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.072778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.073014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.073084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.073287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.073322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.073494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.073526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 
00:27:18.270 [2024-11-20 15:36:22.073712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.073743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.073872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.073903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.074108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.074141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.074400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.074433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.074666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.074696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 
00:27:18.270 [2024-11-20 15:36:22.074808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.074840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.075010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.075042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.075300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.075331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.075451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.075482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.075743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.075775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 
00:27:18.270 [2024-11-20 15:36:22.075901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.075938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.076187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.076219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.076385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.076416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.076595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.076627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.076822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.076853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 
00:27:18.270 [2024-11-20 15:36:22.076990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.077022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.077260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.077291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.077464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.077495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.077677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.077708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.077883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.077913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 
00:27:18.270 [2024-11-20 15:36:22.078128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.078159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.078341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.078371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.078499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.078530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.078697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.270 [2024-11-20 15:36:22.078729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.270 qpair failed and we were unable to recover it. 00:27:18.270 [2024-11-20 15:36:22.078972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.271 [2024-11-20 15:36:22.079004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.271 qpair failed and we were unable to recover it. 
00:27:18.271 [2024-11-20 15:36:22.079186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.271 [2024-11-20 15:36:22.079217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.271 qpair failed and we were unable to recover it. 00:27:18.271 [2024-11-20 15:36:22.079450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.271 [2024-11-20 15:36:22.079481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.271 qpair failed and we were unable to recover it. 00:27:18.271 [2024-11-20 15:36:22.079734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.271 [2024-11-20 15:36:22.079764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.271 qpair failed and we were unable to recover it. 00:27:18.271 [2024-11-20 15:36:22.079898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.271 [2024-11-20 15:36:22.079929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.271 qpair failed and we were unable to recover it. 00:27:18.271 [2024-11-20 15:36:22.080124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.271 [2024-11-20 15:36:22.080155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.271 qpair failed and we were unable to recover it. 
00:27:18.271 [2024-11-20 15:36:22.080361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.271 [2024-11-20 15:36:22.080392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.271 qpair failed and we were unable to recover it. 00:27:18.271 [2024-11-20 15:36:22.080581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.271 [2024-11-20 15:36:22.080612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.271 qpair failed and we were unable to recover it. 00:27:18.271 [2024-11-20 15:36:22.080794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.271 [2024-11-20 15:36:22.080825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.271 qpair failed and we were unable to recover it. 00:27:18.271 [2024-11-20 15:36:22.081009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.271 [2024-11-20 15:36:22.081041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.271 qpair failed and we were unable to recover it. 00:27:18.271 [2024-11-20 15:36:22.081236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.271 [2024-11-20 15:36:22.081267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.271 qpair failed and we were unable to recover it. 
00:27:18.271 [2024-11-20 15:36:22.081525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.081556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.081667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.081698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.081902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.081944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.082090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.082122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.082312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.082344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.082475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.082507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.082695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.082726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.082937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.082980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.083102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.083133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.083453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.083483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.083720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.083751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.083932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.083974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.084252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.084284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.084540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.084571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.084741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.084773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.084942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.084994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.085134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.085166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.085408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.085438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.085664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.085695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.085932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.085976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.086094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.086124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.086371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.086404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.086534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.086564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.086672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.086703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.086882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.086913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.087094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.087125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.087263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.087293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.087474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.087505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.087740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.087771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.088016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.088050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.088218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.088248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.088372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.088403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.088574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.088605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.088794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.088824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.088996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.089027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.089214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.089244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.089412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.089443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.089558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.089589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.089692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.089721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.089886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.089917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.090055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.090091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.090303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.090335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.090514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.090551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.090658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.090689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.090813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.090844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.090975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.091009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.091133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.091163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.091428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.091458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.091676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.091707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.091843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.091874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.092078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.092111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.092294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.092326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.092519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.092549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.092734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.271 [2024-11-20 15:36:22.092765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.271 qpair failed and we were unable to recover it.
00:27:18.271 [2024-11-20 15:36:22.092934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.092978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.093109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.093140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.093259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.093291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.093551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.093582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.093848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.093880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.094013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.094045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.094259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.094291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.094547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.094577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.094705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.094736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.094956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.094988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.095268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.095298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.095488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.095520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.095708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.095739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.096000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.096033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.096227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.096260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.096378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.096414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.096594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.096625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.096793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.096824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.096970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.097003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.097108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.097141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.097321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.097351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.097521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.097551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.097762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.097795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.097966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.097999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.098236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.098267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.098398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.098428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.098546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.098577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.098767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.098798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.099001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.099033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.099310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.099342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.099602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.099632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.099915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.099946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.100155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.100187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.100439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.100469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.100595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.100626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.100765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.100795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.100986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.101019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.101140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.101170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.101404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.101434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.101535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.101566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.101691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.272 [2024-11-20 15:36:22.101721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.272 qpair failed and we were unable to recover it.
00:27:18.272 [2024-11-20 15:36:22.101966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.272 [2024-11-20 15:36:22.101998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.272 qpair failed and we were unable to recover it. 00:27:18.272 [2024-11-20 15:36:22.102121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.272 [2024-11-20 15:36:22.102162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.272 qpair failed and we were unable to recover it. 00:27:18.272 [2024-11-20 15:36:22.102336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.272 [2024-11-20 15:36:22.102367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.272 qpair failed and we were unable to recover it. 00:27:18.272 [2024-11-20 15:36:22.102476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.272 [2024-11-20 15:36:22.102506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.272 qpair failed and we were unable to recover it. 00:27:18.272 [2024-11-20 15:36:22.102622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.272 [2024-11-20 15:36:22.102653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.272 qpair failed and we were unable to recover it. 
00:27:18.272 [2024-11-20 15:36:22.102888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.272 [2024-11-20 15:36:22.102920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.272 qpair failed and we were unable to recover it. 00:27:18.272 [2024-11-20 15:36:22.103128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.272 [2024-11-20 15:36:22.103163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.272 qpair failed and we were unable to recover it. 00:27:18.272 [2024-11-20 15:36:22.103349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.272 [2024-11-20 15:36:22.103381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.272 qpair failed and we were unable to recover it. 00:27:18.272 [2024-11-20 15:36:22.103588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.272 [2024-11-20 15:36:22.103619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.272 qpair failed and we were unable to recover it. 00:27:18.272 [2024-11-20 15:36:22.103884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.272 [2024-11-20 15:36:22.103915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.272 qpair failed and we were unable to recover it. 
00:27:18.272 [2024-11-20 15:36:22.104180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.272 [2024-11-20 15:36:22.104250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.272 qpair failed and we were unable to recover it. 00:27:18.272 [2024-11-20 15:36:22.104543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.272 [2024-11-20 15:36:22.104579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.272 qpair failed and we were unable to recover it. 00:27:18.272 [2024-11-20 15:36:22.104838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.272 [2024-11-20 15:36:22.104870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.272 qpair failed and we were unable to recover it. 00:27:18.272 [2024-11-20 15:36:22.105090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.272 [2024-11-20 15:36:22.105122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.272 qpair failed and we were unable to recover it. 00:27:18.272 [2024-11-20 15:36:22.105341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.272 [2024-11-20 15:36:22.105372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.272 qpair failed and we were unable to recover it. 
00:27:18.272 [2024-11-20 15:36:22.105567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.272 [2024-11-20 15:36:22.105598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.272 qpair failed and we were unable to recover it. 00:27:18.272 [2024-11-20 15:36:22.105779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.272 [2024-11-20 15:36:22.105811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.105995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.106027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.106236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.106266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.106522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.106553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 
00:27:18.273 [2024-11-20 15:36:22.106732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.106763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.106970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.107002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.107131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.107161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.107395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.107426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.107616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.107646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 
00:27:18.273 [2024-11-20 15:36:22.107814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.107846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.107968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.107999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.108108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.108141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.108249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.108284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.108466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.108497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 
00:27:18.273 [2024-11-20 15:36:22.108733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.108763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.109006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.109038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.109270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.109303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.109611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.109642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.109923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.109971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 
00:27:18.273 [2024-11-20 15:36:22.110234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.110266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.110443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.110474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.110747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.110778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.110982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.111015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.111205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.111236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 
00:27:18.273 [2024-11-20 15:36:22.111490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.111521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.111767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.111799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.111997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.112030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.112235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.112266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.112535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.112565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 
00:27:18.273 [2024-11-20 15:36:22.112798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.112830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.113003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.113035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.113218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.113249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.113503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.113534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.113666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.113697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 
00:27:18.273 [2024-11-20 15:36:22.113910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.113940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.114188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.114219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.114481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.114513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.114686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.114716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.114958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.114990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 
00:27:18.273 [2024-11-20 15:36:22.115254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.115285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.115567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.115597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.115876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.115925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.116241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.116272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.116503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.116533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 
00:27:18.273 [2024-11-20 15:36:22.116713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.116744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.116930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.116970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.117211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.117242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.117476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.117508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.117749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.117779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 
00:27:18.273 [2024-11-20 15:36:22.118036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.118069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.118357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.118388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.118687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.118718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.118965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.119004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.119123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.119153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 
00:27:18.273 [2024-11-20 15:36:22.119419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.119449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.119657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.119688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.119801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.119830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.120019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.120051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.120261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.120291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 
00:27:18.273 [2024-11-20 15:36:22.120530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.120560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.120742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.120773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.121035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.121067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.121273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.121303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 00:27:18.273 [2024-11-20 15:36:22.121535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.273 [2024-11-20 15:36:22.121566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.273 qpair failed and we were unable to recover it. 
00:27:18.273 [2024-11-20 15:36:22.121830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.274 [2024-11-20 15:36:22.121861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.274 qpair failed and we were unable to recover it. 00:27:18.274 [2024-11-20 15:36:22.122157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.274 [2024-11-20 15:36:22.122190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.274 qpair failed and we were unable to recover it. 00:27:18.274 [2024-11-20 15:36:22.122452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.274 [2024-11-20 15:36:22.122483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.274 qpair failed and we were unable to recover it. 00:27:18.274 [2024-11-20 15:36:22.122765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.274 [2024-11-20 15:36:22.122796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.274 qpair failed and we were unable to recover it. 00:27:18.274 [2024-11-20 15:36:22.123081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.274 [2024-11-20 15:36:22.123113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.274 qpair failed and we were unable to recover it. 
00:27:18.274 [2024-11-20 15:36:22.123385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.274 [2024-11-20 15:36:22.123415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.274 qpair failed and we were unable to recover it. 00:27:18.274 [2024-11-20 15:36:22.123595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.274 [2024-11-20 15:36:22.123625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.274 qpair failed and we were unable to recover it. 00:27:18.274 [2024-11-20 15:36:22.123837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.274 [2024-11-20 15:36:22.123869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.274 qpair failed and we were unable to recover it. 00:27:18.274 [2024-11-20 15:36:22.124057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.274 [2024-11-20 15:36:22.124089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.274 qpair failed and we were unable to recover it. 00:27:18.274 [2024-11-20 15:36:22.124260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.274 [2024-11-20 15:36:22.124291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.274 qpair failed and we were unable to recover it. 
00:27:18.563 [2024-11-20 15:36:22.153059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.563 [2024-11-20 15:36:22.153091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.563 qpair failed and we were unable to recover it. 00:27:18.563 [2024-11-20 15:36:22.153330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.563 [2024-11-20 15:36:22.153361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.563 qpair failed and we were unable to recover it. 00:27:18.563 [2024-11-20 15:36:22.153547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.563 [2024-11-20 15:36:22.153578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.563 qpair failed and we were unable to recover it. 00:27:18.563 [2024-11-20 15:36:22.153863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.563 [2024-11-20 15:36:22.153893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.563 qpair failed and we were unable to recover it. 00:27:18.563 [2024-11-20 15:36:22.154168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.563 [2024-11-20 15:36:22.154200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 
00:27:18.564 [2024-11-20 15:36:22.154441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.154472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.154642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.154673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.154858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.154887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.155173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.155206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.155472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.155504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 
00:27:18.564 [2024-11-20 15:36:22.155767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.155798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.155972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.156005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.156128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.156158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.156401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.156431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.156616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.156646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 
00:27:18.564 [2024-11-20 15:36:22.156935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.156976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.157177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.157209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.157396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.157428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.157615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.157645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.157767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.157798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 
00:27:18.564 [2024-11-20 15:36:22.158039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.158069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.158370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.158403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.158593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.158623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.158865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.158896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.159165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.159197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 
00:27:18.564 [2024-11-20 15:36:22.159436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.159468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.159707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.159739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.159981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.160014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.160280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.160316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.160603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.160633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 
00:27:18.564 [2024-11-20 15:36:22.160907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.160938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.161138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.161169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.161418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.161449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.161751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.161780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.162045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.162077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 
00:27:18.564 [2024-11-20 15:36:22.162360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.162392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.162666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.162696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.162964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.162997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.163235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.163267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.163439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.163470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 
00:27:18.564 [2024-11-20 15:36:22.163757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.163788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.564 [2024-11-20 15:36:22.164051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.564 [2024-11-20 15:36:22.164086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.564 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.164385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.164416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.164606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.164638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.164851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.164883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 
00:27:18.565 [2024-11-20 15:36:22.165126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.165159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.165399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.165430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.165687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.165718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.165915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.165946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.166215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.166246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 
00:27:18.565 [2024-11-20 15:36:22.166458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.166490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.166669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.166700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.166970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.167002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.167213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.167244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.167497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.167529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 
00:27:18.565 [2024-11-20 15:36:22.167722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.167753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.168050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.168083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.168346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.168378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.168584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.168615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.168867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.168898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 
00:27:18.565 [2024-11-20 15:36:22.169081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.169114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.169301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.169331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.169592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.169623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.169867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.169900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.170111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.170143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 
00:27:18.565 [2024-11-20 15:36:22.170405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.170437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.170684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.170715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.170906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.170935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.171119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.171157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.171357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.171389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 
00:27:18.565 [2024-11-20 15:36:22.171595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.171625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.171912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.171944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.172241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.172273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.565 [2024-11-20 15:36:22.172566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.565 [2024-11-20 15:36:22.172597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.565 qpair failed and we were unable to recover it. 00:27:18.566 [2024-11-20 15:36:22.172771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.566 [2024-11-20 15:36:22.172802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.566 qpair failed and we were unable to recover it. 
00:27:18.566 [2024-11-20 15:36:22.172986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.566 [2024-11-20 15:36:22.173020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.566 qpair failed and we were unable to recover it. 
[... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." messages repeated 98 more times between 15:36:22.173215 and 15:36:22.199024 ...]
00:27:18.568 [2024-11-20 15:36:22.199185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.568 [2024-11-20 15:36:22.199215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.568 qpair failed and we were unable to recover it. 
00:27:18.568 [2024-11-20 15:36:22.199538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.568 [2024-11-20 15:36:22.199613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.568 qpair failed and we were unable to recover it. 
[... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." messages repeated 13 more times between 15:36:22.199959 and 15:36:22.203287 ...]
00:27:18.569 [2024-11-20 15:36:22.203544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.203575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 
00:27:18.569 [2024-11-20 15:36:22.203768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.203799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.204070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.204103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.204317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.204348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.204545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.204576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.204773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.204805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 
00:27:18.569 [2024-11-20 15:36:22.205004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.205036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.205308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.205340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.205477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.205508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.205752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.205783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.206097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.206129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 
00:27:18.569 [2024-11-20 15:36:22.206354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.206385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.206639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.206671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.206933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.206973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.207110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.207142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.207464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.207496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 
00:27:18.569 [2024-11-20 15:36:22.207742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.207772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.208020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.208052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.208242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.208272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.208544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.208574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.208786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.208817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 
00:27:18.569 [2024-11-20 15:36:22.209004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.209037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.209291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.209322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.209510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.209541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.209834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.209865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.210104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.210137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 
00:27:18.569 [2024-11-20 15:36:22.210471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.210548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.210854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.210890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.211192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.211227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.211496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.211528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.211716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.211746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 
00:27:18.569 [2024-11-20 15:36:22.212023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.212056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.212280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.212312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.212491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.212522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.212698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.212729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 00:27:18.569 [2024-11-20 15:36:22.212920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.569 [2024-11-20 15:36:22.212962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.569 qpair failed and we were unable to recover it. 
00:27:18.569 [2024-11-20 15:36:22.213231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.213264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.213461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.213493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.213753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.213784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.214076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.214118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.214389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.214420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 
00:27:18.570 [2024-11-20 15:36:22.214633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.214665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.214927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.214967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.215257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.215288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.215589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.215620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.215910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.215940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 
00:27:18.570 [2024-11-20 15:36:22.216163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.216196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.216468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.216498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.216685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.216716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.216901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.216932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.217137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.217169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 
00:27:18.570 [2024-11-20 15:36:22.217441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.217473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.217678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.217710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.218013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.218048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.218306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.218338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.218643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.218674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 
00:27:18.570 [2024-11-20 15:36:22.218937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.218982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.219232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.219264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.219471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.219502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.219779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.219811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.220078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.220112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 
00:27:18.570 [2024-11-20 15:36:22.220323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.220354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.220630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.220661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.220911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.220943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.221150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.221182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.221377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.221408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 
00:27:18.570 [2024-11-20 15:36:22.221585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.221659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.221880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.221915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.222139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.222174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.570 [2024-11-20 15:36:22.222313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.570 [2024-11-20 15:36:22.222344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.570 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.222615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.222647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 
00:27:18.571 [2024-11-20 15:36:22.222896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.222928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.223167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.223199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.223400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.223431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.223620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.223651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.223925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.223967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 
00:27:18.571 [2024-11-20 15:36:22.224244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.224276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.224526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.224556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.224697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.224728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.224997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.225040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.225318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.225349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 
00:27:18.571 [2024-11-20 15:36:22.225629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.225659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.225875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.225906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.226217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.226249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.226451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.226482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.226755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.226785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 
00:27:18.571 [2024-11-20 15:36:22.227100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.227132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.227393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.227425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.227704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.227735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.227987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.228020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.228304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.228336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 
00:27:18.571 [2024-11-20 15:36:22.228615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.228646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.228907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.228940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.229243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.229275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.229541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.229572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.229800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.229832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 
00:27:18.571 [2024-11-20 15:36:22.230069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.230102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.230377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.230410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.230690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.230721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.230928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.230969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.231252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.231283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 
00:27:18.571 [2024-11-20 15:36:22.231477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.231509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.231689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.231719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.231998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.232031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.232305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.232336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.232519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.232550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 
00:27:18.571 [2024-11-20 15:36:22.232846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.232878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.571 [2024-11-20 15:36:22.233155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.571 [2024-11-20 15:36:22.233188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.571 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.233392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.233423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.233675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.233707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.233971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.234003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 
00:27:18.572 [2024-11-20 15:36:22.234297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.234330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.234554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.234584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.234796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.234828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.235128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.235161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.235390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.235422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 
00:27:18.572 [2024-11-20 15:36:22.235702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.235734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.235919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.235959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.236137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.236168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.236445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.236476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.236736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.236768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 
00:27:18.572 [2024-11-20 15:36:22.237040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.237073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.237360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.237390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.237512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.237543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.237760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.237790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.238065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.238098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 
00:27:18.572 [2024-11-20 15:36:22.238219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.238251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.238529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.238560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.238844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.238875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.239163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.239196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.239471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.239503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 
00:27:18.572 [2024-11-20 15:36:22.239797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.239827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.240128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.240160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.240367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.240399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.240674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.240704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.240826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.240858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 
00:27:18.572 [2024-11-20 15:36:22.241079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.241112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.241389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.241420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.241713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.241743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.241960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.241993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.242278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.242309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 
00:27:18.572 [2024-11-20 15:36:22.242584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.242616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.242901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.242932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.243215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.243248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.243535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.243565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 00:27:18.572 [2024-11-20 15:36:22.243847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.572 [2024-11-20 15:36:22.243878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.572 qpair failed and we were unable to recover it. 
00:27:18.573 [2024-11-20 15:36:22.244159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.244199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.244477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.244509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.244704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.244737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.245012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.245045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.245333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.245365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 
00:27:18.573 [2024-11-20 15:36:22.245584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.245615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.245888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.245920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.246132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.246164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.246462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.246494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.246711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.246743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 
00:27:18.573 [2024-11-20 15:36:22.246937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.246979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.247249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.247282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.247484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.247516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.247776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.247808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.248116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.248150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 
00:27:18.573 [2024-11-20 15:36:22.248385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.248416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.248694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.248726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.248976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.249009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.249266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.249298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.249493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.249524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 
00:27:18.573 [2024-11-20 15:36:22.249805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.249837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.250040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.250072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.250271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.250302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.250575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.250607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.250823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.250854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 
00:27:18.573 [2024-11-20 15:36:22.251149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.251183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.251455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.251488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.251772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.251804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.252010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.252043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 00:27:18.573 [2024-11-20 15:36:22.252345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.573 [2024-11-20 15:36:22.252376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.573 qpair failed and we were unable to recover it. 
00:27:18.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2322513 Killed "${NVMF_APP[@]}" "$@"
00:27:18.575 [2024-11-20 15:36:22.264892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.575 [2024-11-20 15:36:22.264924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.575 qpair failed and we were unable to recover it.
00:27:18.575 [2024-11-20 15:36:22.265227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.575 [2024-11-20 15:36:22.265259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.575 qpair failed and we were unable to recover it.
00:27:18.575 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:18.575 [2024-11-20 15:36:22.265545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.575 [2024-11-20 15:36:22.265578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.575 qpair failed and we were unable to recover it.
00:27:18.575 [2024-11-20 15:36:22.265862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.575 [2024-11-20 15:36:22.265894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.575 qpair failed and we were unable to recover it.
00:27:18.575 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:18.575 [2024-11-20 15:36:22.266182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.575 [2024-11-20 15:36:22.266219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.575 qpair failed and we were unable to recover it.
00:27:18.575 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:18.575 [2024-11-20 15:36:22.266434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.575 [2024-11-20 15:36:22.266466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.575 qpair failed and we were unable to recover it.
00:27:18.575 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:18.575 [2024-11-20 15:36:22.266737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.575 [2024-11-20 15:36:22.266770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.575 qpair failed and we were unable to recover it.
00:27:18.575 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:18.575 [2024-11-20 15:36:22.266977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.575 [2024-11-20 15:36:22.267012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.575 qpair failed and we were unable to recover it.
00:27:18.576 [2024-11-20 15:36:22.273924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.576 [2024-11-20 15:36:22.273965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.576 qpair failed and we were unable to recover it.
00:27:18.576 [2024-11-20 15:36:22.274120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.576 [2024-11-20 15:36:22.274151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.576 qpair failed and we were unable to recover it.
00:27:18.576 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2323229
00:27:18.576 [2024-11-20 15:36:22.274302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.576 [2024-11-20 15:36:22.274336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.576 qpair failed and we were unable to recover it.
00:27:18.576 [2024-11-20 15:36:22.274550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.576 [2024-11-20 15:36:22.274586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.576 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2323229
00:27:18.576 qpair failed and we were unable to recover it.
00:27:18.576 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:18.576 [2024-11-20 15:36:22.274809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.576 [2024-11-20 15:36:22.274842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.576 qpair failed and we were unable to recover it.
00:27:18.576 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2323229 ']'
00:27:18.576 [2024-11-20 15:36:22.275093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.576 [2024-11-20 15:36:22.275128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.576 qpair failed and we were unable to recover it.
00:27:18.576 [2024-11-20 15:36:22.275321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.576 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:18.576 [2024-11-20 15:36:22.275355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.576 qpair failed and we were unable to recover it.
00:27:18.576 [2024-11-20 15:36:22.275632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.576 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:18.576 [2024-11-20 15:36:22.275714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.576 qpair failed and we were unable to recover it.
00:27:18.576 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:18.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:18.576 [2024-11-20 15:36:22.276055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.576 [2024-11-20 15:36:22.276100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.576 qpair failed and we were unable to recover it.
00:27:18.576 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:18.576 [2024-11-20 15:36:22.276304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.576 [2024-11-20 15:36:22.276340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.576 qpair failed and we were unable to recover it.
00:27:18.576 [2024-11-20 15:36:22.276495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.576 [2024-11-20 15:36:22.276529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.576 qpair failed and we were unable to recover it.
00:27:18.576 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:18.576 [2024-11-20 15:36:22.276805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.576 [2024-11-20 15:36:22.276838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.576 qpair failed and we were unable to recover it.
00:27:18.576 [2024-11-20 15:36:22.277011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.576 [2024-11-20 15:36:22.277045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.576 qpair failed and we were unable to recover it.
00:27:18.576 [2024-11-20 15:36:22.277311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.576 [2024-11-20 15:36:22.277344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.576 qpair failed and we were unable to recover it.
00:27:18.576 [2024-11-20 15:36:22.277621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.576 [2024-11-20 15:36:22.277656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.576 qpair failed and we were unable to recover it.
00:27:18.576 [2024-11-20 15:36:22.277813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.576 [2024-11-20 15:36:22.277844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.576 qpair failed and we were unable to recover it.
00:27:18.576 [2024-11-20 15:36:22.278141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.576 [2024-11-20 15:36:22.278177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.576 qpair failed and we were unable to recover it.
[… the same three-line sequence — connect() failed, errno = 111 / sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats for every subsequent connection retry, timestamps 15:36:22.278380 through 15:36:22.305641; repeated entries elided …]
00:27:18.579 [2024-11-20 15:36:22.305768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.305800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 00:27:18.579 [2024-11-20 15:36:22.305998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.306031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 00:27:18.579 [2024-11-20 15:36:22.306226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.306258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 00:27:18.579 [2024-11-20 15:36:22.306461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.306493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 00:27:18.579 [2024-11-20 15:36:22.306702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.306733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 
00:27:18.579 [2024-11-20 15:36:22.306962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.306999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 00:27:18.579 [2024-11-20 15:36:22.307204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.307235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 00:27:18.579 [2024-11-20 15:36:22.307487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.307520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 00:27:18.579 [2024-11-20 15:36:22.307786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.307819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 00:27:18.579 [2024-11-20 15:36:22.307975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.308007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 
00:27:18.579 [2024-11-20 15:36:22.308130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.308163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 00:27:18.579 [2024-11-20 15:36:22.308364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.308396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 00:27:18.579 [2024-11-20 15:36:22.308653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.308686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 00:27:18.579 [2024-11-20 15:36:22.308945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.308985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 00:27:18.579 [2024-11-20 15:36:22.309107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.309139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 
00:27:18.579 [2024-11-20 15:36:22.309321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.309354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 00:27:18.579 [2024-11-20 15:36:22.309598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.309630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 00:27:18.579 [2024-11-20 15:36:22.309897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.309929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 00:27:18.579 [2024-11-20 15:36:22.310068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.310101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 00:27:18.579 [2024-11-20 15:36:22.310333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.310372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 
00:27:18.579 [2024-11-20 15:36:22.310576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.310609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 00:27:18.579 [2024-11-20 15:36:22.310805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.579 [2024-11-20 15:36:22.310837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.579 qpair failed and we were unable to recover it. 00:27:18.579 [2024-11-20 15:36:22.311115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.311148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.311272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.311304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.311499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.311532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 
00:27:18.580 [2024-11-20 15:36:22.311787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.311818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.312019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.312051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.312244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.312276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.312479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.312510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.312640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.312672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 
00:27:18.580 [2024-11-20 15:36:22.312868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.312901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.313129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.313163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.313461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.313493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.313549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x184faf0 (9): Bad file descriptor 00:27:18.580 [2024-11-20 15:36:22.313997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.314075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.314218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.314254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 
00:27:18.580 [2024-11-20 15:36:22.314443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.314478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.314732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.314764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.314988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.315021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.315275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.315307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.315446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.315477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 
00:27:18.580 [2024-11-20 15:36:22.315688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.315719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.316027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.316061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.316274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.316305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.316556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.316588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.316862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.316894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 
00:27:18.580 [2024-11-20 15:36:22.317036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.317069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.317269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.317301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.317445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.317477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.317678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.317710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.317934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.317974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 
00:27:18.580 [2024-11-20 15:36:22.318191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.318222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.318367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.318398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.318661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.318692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.318886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.318918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.319126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.319163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 
00:27:18.580 [2024-11-20 15:36:22.319353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.319385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.319680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.319720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.320011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.320045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.320235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.320267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 00:27:18.580 [2024-11-20 15:36:22.320469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.580 [2024-11-20 15:36:22.320506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.580 qpair failed and we were unable to recover it. 
00:27:18.581 [2024-11-20 15:36:22.320700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.320731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 00:27:18.581 [2024-11-20 15:36:22.320935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.320978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 00:27:18.581 [2024-11-20 15:36:22.321258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.321289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 00:27:18.581 [2024-11-20 15:36:22.321559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.321590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 00:27:18.581 [2024-11-20 15:36:22.321702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.321733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 
00:27:18.581 [2024-11-20 15:36:22.321939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.321979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 00:27:18.581 [2024-11-20 15:36:22.322177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.322209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 00:27:18.581 [2024-11-20 15:36:22.322321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.322353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 00:27:18.581 [2024-11-20 15:36:22.322484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.322516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 00:27:18.581 [2024-11-20 15:36:22.322724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.322756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 
00:27:18.581 [2024-11-20 15:36:22.322964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.322997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 00:27:18.581 [2024-11-20 15:36:22.323202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.323235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 00:27:18.581 [2024-11-20 15:36:22.323437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.323469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 00:27:18.581 [2024-11-20 15:36:22.323701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.323733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 00:27:18.581 [2024-11-20 15:36:22.323929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.323973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 
00:27:18.581 [2024-11-20 15:36:22.324173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.324204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 00:27:18.581 [2024-11-20 15:36:22.324341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.324371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 00:27:18.581 [2024-11-20 15:36:22.324567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.324599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 00:27:18.581 [2024-11-20 15:36:22.324793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.324824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 00:27:18.581 [2024-11-20 15:36:22.325022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.325056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 
00:27:18.581 [2024-11-20 15:36:22.325254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.325285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 00:27:18.581 [2024-11-20 15:36:22.325395] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:27:18.581 [2024-11-20 15:36:22.325456] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.581 [2024-11-20 15:36:22.325479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.325513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 00:27:18.581 [2024-11-20 15:36:22.325736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.325767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 00:27:18.581 [2024-11-20 15:36:22.325971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.581 [2024-11-20 15:36:22.326003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.581 qpair failed and we were unable to recover it. 
00:27:18.584 [2024-11-20 15:36:22.349749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.584 [2024-11-20 15:36:22.349780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.584 qpair failed and we were unable to recover it. 00:27:18.584 [2024-11-20 15:36:22.349985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.584 [2024-11-20 15:36:22.350018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.584 qpair failed and we were unable to recover it. 00:27:18.584 [2024-11-20 15:36:22.350206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.584 [2024-11-20 15:36:22.350239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.584 qpair failed and we were unable to recover it. 00:27:18.584 [2024-11-20 15:36:22.350497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.584 [2024-11-20 15:36:22.350528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.584 qpair failed and we were unable to recover it. 00:27:18.584 [2024-11-20 15:36:22.350645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.584 [2024-11-20 15:36:22.350677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.584 qpair failed and we were unable to recover it. 
00:27:18.584 [2024-11-20 15:36:22.350798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.584 [2024-11-20 15:36:22.350830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.584 qpair failed and we were unable to recover it. 00:27:18.584 [2024-11-20 15:36:22.351107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.584 [2024-11-20 15:36:22.351139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.584 qpair failed and we were unable to recover it. 00:27:18.584 [2024-11-20 15:36:22.351273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.584 [2024-11-20 15:36:22.351305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.584 qpair failed and we were unable to recover it. 00:27:18.584 [2024-11-20 15:36:22.351589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.584 [2024-11-20 15:36:22.351620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.584 qpair failed and we were unable to recover it. 00:27:18.584 [2024-11-20 15:36:22.351814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.584 [2024-11-20 15:36:22.351846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.584 qpair failed and we were unable to recover it. 
00:27:18.584 [2024-11-20 15:36:22.351993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.584 [2024-11-20 15:36:22.352027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.584 qpair failed and we were unable to recover it. 00:27:18.584 [2024-11-20 15:36:22.352162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.584 [2024-11-20 15:36:22.352197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.584 qpair failed and we were unable to recover it. 00:27:18.584 [2024-11-20 15:36:22.352396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.584 [2024-11-20 15:36:22.352428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.584 qpair failed and we were unable to recover it. 00:27:18.584 [2024-11-20 15:36:22.352607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.584 [2024-11-20 15:36:22.352640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.584 qpair failed and we were unable to recover it. 00:27:18.584 [2024-11-20 15:36:22.352756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.584 [2024-11-20 15:36:22.352788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.584 qpair failed and we were unable to recover it. 
00:27:18.584 [2024-11-20 15:36:22.352915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.584 [2024-11-20 15:36:22.352955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.584 qpair failed and we were unable to recover it. 00:27:18.584 [2024-11-20 15:36:22.353201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.584 [2024-11-20 15:36:22.353234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.584 qpair failed and we were unable to recover it. 00:27:18.584 [2024-11-20 15:36:22.353421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.584 [2024-11-20 15:36:22.353454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.584 qpair failed and we were unable to recover it. 00:27:18.584 [2024-11-20 15:36:22.353591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.584 [2024-11-20 15:36:22.353628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.584 qpair failed and we were unable to recover it. 00:27:18.584 [2024-11-20 15:36:22.353849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.353881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 
00:27:18.585 [2024-11-20 15:36:22.354060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.354093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.354269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.354302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.354498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.354531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.354739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.354770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.354986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.355019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 
00:27:18.585 [2024-11-20 15:36:22.355164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.355198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.355337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.355368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.355488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.355520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.355763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.355794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.355984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.356017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 
00:27:18.585 [2024-11-20 15:36:22.356145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.356177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.356369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.356403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.356595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.356628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.356763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.356794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.356924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.356965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 
00:27:18.585 [2024-11-20 15:36:22.357080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.357112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.357312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.357344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.357539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.357571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.357836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.357868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.358062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.358094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 
00:27:18.585 [2024-11-20 15:36:22.358216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.358247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.358356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.358388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.358585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.358616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.358831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.358863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.359045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.359078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 
00:27:18.585 [2024-11-20 15:36:22.359274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.359311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.359435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.359466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.359666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.359696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.359890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.359921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.360125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.360161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 
00:27:18.585 [2024-11-20 15:36:22.360284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.360313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.360490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.360520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.360654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.360685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.360945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.360985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.361108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.361139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 
00:27:18.585 [2024-11-20 15:36:22.361322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.361354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.361491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.585 [2024-11-20 15:36:22.361522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.585 qpair failed and we were unable to recover it. 00:27:18.585 [2024-11-20 15:36:22.361716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.361748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.361887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.361918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.362057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.362091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 
00:27:18.586 [2024-11-20 15:36:22.362220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.362252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.362370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.362401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.362534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.362565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.362806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.362837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.363105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.363138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 
00:27:18.586 [2024-11-20 15:36:22.363323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.363356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.363626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.363658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.363835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.363867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.364053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.364086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.364304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.364337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 
00:27:18.586 [2024-11-20 15:36:22.364580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.364610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.364897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.364929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.365065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.365096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.365310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.365341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.365583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.365615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 
00:27:18.586 [2024-11-20 15:36:22.365725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.365758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.365876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.365906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.366098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.366131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.366321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.366353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.366619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.366650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 
00:27:18.586 [2024-11-20 15:36:22.366842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.366873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.367008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.367042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.367335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.367366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.367585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.367617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.367754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.367786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 
00:27:18.586 [2024-11-20 15:36:22.367971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.368003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.368184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.368217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.368458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.368489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.368674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.368705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.368833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.368865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 
00:27:18.586 [2024-11-20 15:36:22.369106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.369139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.369269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.369301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.369421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.369454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.369643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.369675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 00:27:18.586 [2024-11-20 15:36:22.369865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.586 [2024-11-20 15:36:22.369897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.586 qpair failed and we were unable to recover it. 
00:27:18.586 [2024-11-20 15:36:22.370122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.370155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.370340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.370371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.370579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.370611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.370886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.370918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.371023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.371054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 
00:27:18.587 [2024-11-20 15:36:22.371186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.371217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.371457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.371488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.371612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.371643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.371759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.371790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.372033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.372066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 
00:27:18.587 [2024-11-20 15:36:22.372278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.372310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.372493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.372524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.372635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.372667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.372851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.372881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.373164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.373197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 
00:27:18.587 [2024-11-20 15:36:22.373314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.373346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.373551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.373583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.373710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.373741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.373956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.373995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.374238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.374270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 
00:27:18.587 [2024-11-20 15:36:22.374460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.374492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.374614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.374646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.374857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.374888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.375083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.375115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.375355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.375387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 
00:27:18.587 [2024-11-20 15:36:22.375626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.375658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.375879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.375911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.376112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.376146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.376319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.376350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.376468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.376500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 
00:27:18.587 [2024-11-20 15:36:22.376681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.376712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.376977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.377010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.377236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.377269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.377547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.377578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.377780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.377811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 
00:27:18.587 [2024-11-20 15:36:22.378025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.378057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.378229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.378262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.378453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.378483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.587 [2024-11-20 15:36:22.378721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.587 [2024-11-20 15:36:22.378753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.587 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.378955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.378988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 
00:27:18.588 [2024-11-20 15:36:22.379162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.379193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.379378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.379410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.379599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.379630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.379734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.379765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.380027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.380059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 
00:27:18.588 [2024-11-20 15:36:22.380269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.380307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.380497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.380530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.380798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.380829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.381002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.381034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.381262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.381294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 
00:27:18.588 [2024-11-20 15:36:22.381490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.381521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.381695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.381725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.381832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.381865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.382104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.382136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.382274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.382306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 
00:27:18.588 [2024-11-20 15:36:22.382563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.382595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.382780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.382812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.382914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.382944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.383086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.383119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.383321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.383354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 
00:27:18.588 [2024-11-20 15:36:22.383469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.383502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.383637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.383669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.383875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.383907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.384172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.384205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.384442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.384472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 
00:27:18.588 [2024-11-20 15:36:22.384602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.384634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.384828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.384860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.385101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.385134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.385316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.385348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.385534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.385567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 
00:27:18.588 [2024-11-20 15:36:22.385756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.385786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.385967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.385999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.386239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.588 [2024-11-20 15:36:22.386277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.588 qpair failed and we were unable to recover it. 00:27:18.588 [2024-11-20 15:36:22.386457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.386488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.386608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.386639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 
00:27:18.589 [2024-11-20 15:36:22.386835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.386867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.386972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.387003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.387267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.387299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.387462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.387492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.387707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.387739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 
00:27:18.589 [2024-11-20 15:36:22.387981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.388014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.388154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.388187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.388361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.388393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.388509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.388541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.388783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.388813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 
00:27:18.589 [2024-11-20 15:36:22.388995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.389028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.389203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.389274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.389480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.389517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.389623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.389654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.389788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.389821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 
00:27:18.589 [2024-11-20 15:36:22.389933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.389978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.390159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.390192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.390307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.390338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.390594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.390626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.390809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.390840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 
00:27:18.589 [2024-11-20 15:36:22.390964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.390998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.391207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.391238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.391427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.391459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.391633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.391663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.391868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.391909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 
00:27:18.589 [2024-11-20 15:36:22.392140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.392174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.392306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.392335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.392572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.392603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.392804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.392836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.393107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.393141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 
00:27:18.589 [2024-11-20 15:36:22.393346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.393378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.393584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.393615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.393744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.393775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.394043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.394076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.394319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.394351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 
00:27:18.589 [2024-11-20 15:36:22.394526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.589 [2024-11-20 15:36:22.394557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.589 qpair failed and we were unable to recover it. 00:27:18.589 [2024-11-20 15:36:22.394736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.394768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.395032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.395064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.395182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.395213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.395328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.395360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 
00:27:18.590 [2024-11-20 15:36:22.395498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.395528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.395701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.395732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.395907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.395937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.396143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.396176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.396303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.396335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 
00:27:18.590 [2024-11-20 15:36:22.396601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.396633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.396748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.396777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.396960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.396993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.397118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.397149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.397319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.397351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 
00:27:18.590 [2024-11-20 15:36:22.397673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.397705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.397961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.398033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.398343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.398378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.398646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.398679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.398865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.398897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 
00:27:18.590 [2024-11-20 15:36:22.399179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.399214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.399340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.399371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.399559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.399590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.399699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.399730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.399994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.400028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 
00:27:18.590 [2024-11-20 15:36:22.400226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.400258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.400444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.400475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.400613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.400644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.400905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.400936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.401092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.401134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 
00:27:18.590 [2024-11-20 15:36:22.401315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.401344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.401516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.401547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.401674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.401706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.401967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.402000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.402175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.402206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 
00:27:18.590 [2024-11-20 15:36:22.402468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.402499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.402632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.402662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.402894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.402924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.590 [2024-11-20 15:36:22.406204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.590 [2024-11-20 15:36:22.406239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.590 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.406503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.406534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 
00:27:18.591 [2024-11-20 15:36:22.406717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.406748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.407012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.407044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.407162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.407193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.407448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.407481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.407737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.407768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 
00:27:18.591 [2024-11-20 15:36:22.407906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.407937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.408077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.408106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.408343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.408374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.408636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.408668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.408751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:18.591 [2024-11-20 15:36:22.408778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.408806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 
00:27:18.591 [2024-11-20 15:36:22.409013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.409046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.409232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.409263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.409525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.409557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.409673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.409704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.409822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.409852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 
00:27:18.591 [2024-11-20 15:36:22.410056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.410088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.410303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.410334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.410541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.410573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.410850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.410881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.411081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.411114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 
00:27:18.591 [2024-11-20 15:36:22.411284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.411316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.411555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.411585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.411716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.411747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.411851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.411881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.412063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.412097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 
00:27:18.591 [2024-11-20 15:36:22.412283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.412314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.412421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.412451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.412706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.412737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.412907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.412939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 00:27:18.591 [2024-11-20 15:36:22.413109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.591 [2024-11-20 15:36:22.413147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.591 qpair failed and we were unable to recover it. 
00:27:18.591 [2024-11-20 15:36:22.413336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.591 [2024-11-20 15:36:22.413369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.591 qpair failed and we were unable to recover it.
00:27:18.591 [2024-11-20 15:36:22.413634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.591 [2024-11-20 15:36:22.413665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.591 qpair failed and we were unable to recover it.
00:27:18.591 [2024-11-20 15:36:22.413834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.591 [2024-11-20 15:36:22.413866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.591 qpair failed and we were unable to recover it.
00:27:18.591 [2024-11-20 15:36:22.414006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.591 [2024-11-20 15:36:22.414040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.591 qpair failed and we were unable to recover it.
00:27:18.591 [2024-11-20 15:36:22.414244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.591 [2024-11-20 15:36:22.414276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.591 qpair failed and we were unable to recover it.
00:27:18.591 [2024-11-20 15:36:22.414513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.591 [2024-11-20 15:36:22.414544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.591 qpair failed and we were unable to recover it.
00:27:18.591 [2024-11-20 15:36:22.414743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.414775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.414985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.415020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.415259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.415290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.415460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.415493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.415629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.415660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.415841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.415872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.416035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.416068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.416335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.416367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.416554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.416585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.416852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.416884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.417166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.417201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.417442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.417475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.417658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.417688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.417874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.417907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.418095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.418129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.418396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.418429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.418535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.418567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.418830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.418863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.418982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.419017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.419281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.419313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.419497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.419537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.419828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.419861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.420105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.420140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.420354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.420387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.420573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.420605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.420809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.420845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.421131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.421167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.421354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.421385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.421602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.421634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.421811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.421843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.422022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.422055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.422262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.422294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.422478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.422510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.422625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.422656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.422788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.422820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.423058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.423092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.423218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.423250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.592 [2024-11-20 15:36:22.423461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.592 [2024-11-20 15:36:22.423492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.592 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.423693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.423725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.423903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.423934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.424135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.424168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.424404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.424436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.424567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.424598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.424781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.424813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.425052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.425085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.425266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.425298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.425399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.425430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.425668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.425706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.425965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.425998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.426182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.426216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.426427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.426459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.426674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.426706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.426896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.426927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.427198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.427231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.427343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.427374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.427558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.427590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.427852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.427883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.428051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.428086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.428276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.428308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.428574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.428607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.428727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.428758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.429032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.429066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.429196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.429229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.429439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.429470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.429573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.429606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.429810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.429841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.430020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.430052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.430250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.430283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.430471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.593 [2024-11-20 15:36:22.430502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.593 qpair failed and we were unable to recover it.
00:27:18.593 [2024-11-20 15:36:22.430634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.594 [2024-11-20 15:36:22.430667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.594 qpair failed and we were unable to recover it.
00:27:18.594 [2024-11-20 15:36:22.430842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.594 [2024-11-20 15:36:22.430874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.594 qpair failed and we were unable to recover it.
00:27:18.594 [2024-11-20 15:36:22.431012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.594 [2024-11-20 15:36:22.431044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.594 qpair failed and we were unable to recover it.
00:27:18.594 [2024-11-20 15:36:22.431279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.594 [2024-11-20 15:36:22.431312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.594 qpair failed and we were unable to recover it.
00:27:18.594 [2024-11-20 15:36:22.431499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.594 [2024-11-20 15:36:22.431530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.594 qpair failed and we were unable to recover it.
00:27:18.594 [2024-11-20 15:36:22.431794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.594 [2024-11-20 15:36:22.431826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.594 qpair failed and we were unable to recover it.
00:27:18.594 [2024-11-20 15:36:22.432014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.594 [2024-11-20 15:36:22.432048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.594 qpair failed and we were unable to recover it.
00:27:18.594 [2024-11-20 15:36:22.432235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.594 [2024-11-20 15:36:22.432266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.594 qpair failed and we were unable to recover it.
00:27:18.594 [2024-11-20 15:36:22.432388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.594 [2024-11-20 15:36:22.432420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.594 qpair failed and we were unable to recover it.
00:27:18.594 [2024-11-20 15:36:22.432658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.594 [2024-11-20 15:36:22.432690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.594 qpair failed and we were unable to recover it.
00:27:18.594 [2024-11-20 15:36:22.432814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.594 [2024-11-20 15:36:22.432846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.594 qpair failed and we were unable to recover it.
00:27:18.594 [2024-11-20 15:36:22.433018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.594 [2024-11-20 15:36:22.433050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.594 qpair failed and we were unable to recover it.
00:27:18.594 [2024-11-20 15:36:22.433289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.594 [2024-11-20 15:36:22.433322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.594 qpair failed and we were unable to recover it.
00:27:18.594 [2024-11-20 15:36:22.433512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.594 [2024-11-20 15:36:22.433544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.594 qpair failed and we were unable to recover it.
00:27:18.594 [2024-11-20 15:36:22.433719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.594 [2024-11-20 15:36:22.433751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.594 qpair failed and we were unable to recover it.
00:27:18.594 [2024-11-20 15:36:22.433933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.594 [2024-11-20 15:36:22.433984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.594 qpair failed and we were unable to recover it.
00:27:18.594 [2024-11-20 15:36:22.434220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.594 [2024-11-20 15:36:22.434252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.594 qpair failed and we were unable to recover it.
00:27:18.594 [2024-11-20 15:36:22.434495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.594 [2024-11-20 15:36:22.434527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.594 qpair failed and we were unable to recover it.
00:27:18.594 [2024-11-20 15:36:22.434742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.594 [2024-11-20 15:36:22.434774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.594 qpair failed and we were unable to recover it.
00:27:18.594 [2024-11-20 15:36:22.434967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.594 [2024-11-20 15:36:22.435001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.594 qpair failed and we were unable to recover it.
00:27:18.594 [2024-11-20 15:36:22.435170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.594 [2024-11-20 15:36:22.435203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.594 qpair failed and we were unable to recover it. 00:27:18.594 [2024-11-20 15:36:22.435463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.594 [2024-11-20 15:36:22.435495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.594 qpair failed and we were unable to recover it. 00:27:18.594 [2024-11-20 15:36:22.435624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.594 [2024-11-20 15:36:22.435654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.594 qpair failed and we were unable to recover it. 00:27:18.594 [2024-11-20 15:36:22.435910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.594 [2024-11-20 15:36:22.435942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.594 qpair failed and we were unable to recover it. 00:27:18.594 [2024-11-20 15:36:22.436124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.594 [2024-11-20 15:36:22.436155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.594 qpair failed and we were unable to recover it. 
00:27:18.594 [2024-11-20 15:36:22.436332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.594 [2024-11-20 15:36:22.436363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.594 qpair failed and we were unable to recover it. 00:27:18.594 [2024-11-20 15:36:22.436609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.594 [2024-11-20 15:36:22.436640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.594 qpair failed and we were unable to recover it. 00:27:18.594 [2024-11-20 15:36:22.436765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.594 [2024-11-20 15:36:22.436797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.594 qpair failed and we were unable to recover it. 00:27:18.594 [2024-11-20 15:36:22.436980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.594 [2024-11-20 15:36:22.437013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.594 qpair failed and we were unable to recover it. 00:27:18.594 [2024-11-20 15:36:22.437261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.594 [2024-11-20 15:36:22.437293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.594 qpair failed and we were unable to recover it. 
00:27:18.594 [2024-11-20 15:36:22.437541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.594 [2024-11-20 15:36:22.437573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.594 qpair failed and we were unable to recover it. 00:27:18.594 [2024-11-20 15:36:22.437782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.594 [2024-11-20 15:36:22.437814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.594 qpair failed and we were unable to recover it. 00:27:18.867 [2024-11-20 15:36:22.438012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.867 [2024-11-20 15:36:22.438047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.867 qpair failed and we were unable to recover it. 00:27:18.867 [2024-11-20 15:36:22.438233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.867 [2024-11-20 15:36:22.438266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.867 qpair failed and we were unable to recover it. 00:27:18.867 [2024-11-20 15:36:22.438517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.867 [2024-11-20 15:36:22.438550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.867 qpair failed and we were unable to recover it. 
00:27:18.867 [2024-11-20 15:36:22.438815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.867 [2024-11-20 15:36:22.438847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.867 qpair failed and we were unable to recover it. 00:27:18.867 [2024-11-20 15:36:22.439048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.867 [2024-11-20 15:36:22.439081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.867 qpair failed and we were unable to recover it. 00:27:18.867 [2024-11-20 15:36:22.439261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.867 [2024-11-20 15:36:22.439293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.867 qpair failed and we were unable to recover it. 00:27:18.867 [2024-11-20 15:36:22.439476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.867 [2024-11-20 15:36:22.439508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.867 qpair failed and we were unable to recover it. 00:27:18.867 [2024-11-20 15:36:22.439719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.867 [2024-11-20 15:36:22.439751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 
00:27:18.868 [2024-11-20 15:36:22.439866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.439898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.440144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.440177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.440389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.440421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.440595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.440627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.440893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.440925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 
00:27:18.868 [2024-11-20 15:36:22.441123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.441157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.441338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.441376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.441573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.441604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.441737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.441770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.441889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.441920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 
00:27:18.868 [2024-11-20 15:36:22.442077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.442109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.442232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.442263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.442368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.442399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.442607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.442639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.442830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.442868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 
00:27:18.868 [2024-11-20 15:36:22.443043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.443077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.443197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.443230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.443412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.443445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.443569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.443601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.443798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.443830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 
00:27:18.868 [2024-11-20 15:36:22.444003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.444037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.444145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.444178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.444347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.444379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.444505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.444537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.444776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.444807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 
00:27:18.868 [2024-11-20 15:36:22.444983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.445017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.445281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.445313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.445495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.445527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.445709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.445742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.445855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.445887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 
00:27:18.868 [2024-11-20 15:36:22.446036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.446070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.446190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.446223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.446338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.446370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.868 [2024-11-20 15:36:22.446481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.868 [2024-11-20 15:36:22.446519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.868 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.446656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.446689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 
00:27:18.869 [2024-11-20 15:36:22.446892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.446924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.447118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.447151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.447412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.447445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.447695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.447728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.447829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.447863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 
00:27:18.869 [2024-11-20 15:36:22.447988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.448023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.448239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.448272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.448533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.448566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.448700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.448733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.448851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.448883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 
00:27:18.869 [2024-11-20 15:36:22.449081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.449113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.449284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.449316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.449496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.449531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.449771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.449803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.449995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.450029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 
00:27:18.869 [2024-11-20 15:36:22.450140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.450172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.450435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.450468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.450633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.450664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.450801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.450836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.451018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.451052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 
00:27:18.869 [2024-11-20 15:36:22.451181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.451214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.451483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.451516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.451709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.451742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.451859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.451892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.452071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.452106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 
00:27:18.869 [2024-11-20 15:36:22.452277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.452321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.452491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.869 [2024-11-20 15:36:22.452523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.869 [2024-11-20 15:36:22.452531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.869 [2024-11-20 15:36:22.452541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.869 [2024-11-20 15:36:22.452548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:18.869 [2024-11-20 15:36:22.452493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.452526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.452767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.452798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 
00:27:18.869 [2024-11-20 15:36:22.452985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.453018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.453194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.453225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.453432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.453464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.453707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.453739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 00:27:18.869 [2024-11-20 15:36:22.454028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.869 [2024-11-20 15:36:22.454062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.869 qpair failed and we were unable to recover it. 
00:27:18.869 [2024-11-20 15:36:22.453987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:18.869 [2024-11-20 15:36:22.454057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:18.869 [2024-11-20 15:36:22.454167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:18.870 [2024-11-20 15:36:22.454168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:18.869 [2024-11-20 15:36:22.454252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.870 [2024-11-20 15:36:22.454281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.870 qpair failed and we were unable to recover it. 00:27:18.870 [2024-11-20 15:36:22.454404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.870 [2024-11-20 15:36:22.454434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.870 qpair failed and we were unable to recover it. 00:27:18.870 [2024-11-20 15:36:22.454567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.870 [2024-11-20 15:36:22.454596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.870 qpair failed and we were unable to recover it. 00:27:18.870 [2024-11-20 15:36:22.454859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.870 [2024-11-20 15:36:22.454892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.870 qpair failed and we were unable to recover it. 
00:27:18.870 [2024-11-20 15:36:22.455125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.455159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.455368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.455400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.455586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.455617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.455813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.455845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.456022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.456055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.456176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.456209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.456329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.456362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.456550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.456583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.456691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.456724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.456905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.456938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.457087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.457121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.457240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.457272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.457508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.457547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.457797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.457828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.457941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.457985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.458177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.458211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.458434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.458466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.458580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.458613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.458804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.458839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.459015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.459049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.459264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.459297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.459413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.459445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.459576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.459608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.459721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.459753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.459873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.459906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.460011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.460044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.460191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.460222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.460475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.460507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.460635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.460667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.460865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.460898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.461124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.461157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.461423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.461455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.461666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.461699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.870 [2024-11-20 15:36:22.461819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.870 [2024-11-20 15:36:22.461851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.870 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.462026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.462059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.462246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.462277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.462447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.462480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.462721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.462752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.462884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.462916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.463114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.463154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.463260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.463291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.463423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.463456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.463697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.463729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.463901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.463932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.464143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.464176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.464316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.464349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.464589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.464620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.464745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.464778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.464971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.465007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.465292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.465326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.465533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.465565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.465701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.465733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.465944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.465985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.466107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.466140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.466257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.466289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.466402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.466433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.466553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.466585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.466690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.466724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.466837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.466870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.466991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.467025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.467214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.467247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.467437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.467469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.467657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.467689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.467915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.467959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.468137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.468172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.468299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.468332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.468522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.468556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.468734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.468768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.468969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.469003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.469109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.469142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.469260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.469292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.871 [2024-11-20 15:36:22.469457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.871 [2024-11-20 15:36:22.469489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.871 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.469663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.469697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.469867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.469899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.470074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.470108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.470288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.470321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.470493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.470525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.470766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.470799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.470919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.470963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.471083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.471116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.471261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.471320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.471524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.471558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.471663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.471695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.471867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.471898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.472090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.472123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.472390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.472423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.472591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.472621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.472729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.472760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.472877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.472908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.473051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.473084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.473256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.473287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.473393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.473425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.473536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.473569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.473779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.473820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.474082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.872 [2024-11-20 15:36:22.474116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.872 qpair failed and we were unable to recover it.
00:27:18.872 [2024-11-20 15:36:22.474324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.872 [2024-11-20 15:36:22.474356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.872 qpair failed and we were unable to recover it. 00:27:18.872 [2024-11-20 15:36:22.474592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.872 [2024-11-20 15:36:22.474624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.872 qpair failed and we were unable to recover it. 00:27:18.872 [2024-11-20 15:36:22.474865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.872 [2024-11-20 15:36:22.474899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.872 qpair failed and we were unable to recover it. 00:27:18.872 [2024-11-20 15:36:22.475080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.872 [2024-11-20 15:36:22.475114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.872 qpair failed and we were unable to recover it. 00:27:18.872 [2024-11-20 15:36:22.475232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.872 [2024-11-20 15:36:22.475264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.872 qpair failed and we were unable to recover it. 
00:27:18.872 [2024-11-20 15:36:22.475384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.872 [2024-11-20 15:36:22.475416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.872 qpair failed and we were unable to recover it. 00:27:18.872 [2024-11-20 15:36:22.475589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.872 [2024-11-20 15:36:22.475623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.872 qpair failed and we were unable to recover it. 00:27:18.872 [2024-11-20 15:36:22.475733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.872 [2024-11-20 15:36:22.475764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.872 qpair failed and we were unable to recover it. 00:27:18.872 [2024-11-20 15:36:22.475906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.872 [2024-11-20 15:36:22.475940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.872 qpair failed and we were unable to recover it. 00:27:18.872 [2024-11-20 15:36:22.476084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.872 [2024-11-20 15:36:22.476117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.872 qpair failed and we were unable to recover it. 
00:27:18.872 [2024-11-20 15:36:22.476306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.476338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.476455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.476488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.476629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.476674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.476871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.476906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.477103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.477139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 
00:27:18.873 [2024-11-20 15:36:22.477253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.477287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.477412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.477447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.477634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.477666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.477771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.477804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.477993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.478027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 
00:27:18.873 [2024-11-20 15:36:22.478203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.478237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.478353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.478384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.478576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.478608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.478717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.478748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.478853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.478885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 
00:27:18.873 [2024-11-20 15:36:22.479070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.479110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.479221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.479254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.479357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.479388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.479506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.479538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.479710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.479742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 
00:27:18.873 [2024-11-20 15:36:22.479851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.479885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.480017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.480051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.480176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.480209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.480384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.480417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.480541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.480573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 
00:27:18.873 [2024-11-20 15:36:22.480748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.480780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.480893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.480925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.481041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.481074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.481314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.481347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.481472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.481504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 
00:27:18.873 [2024-11-20 15:36:22.481675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.481707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.481877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.481909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.482097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.482130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.482251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.482284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.482468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.482501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 
00:27:18.873 [2024-11-20 15:36:22.482612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.482644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.482772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.482805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.482914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.873 [2024-11-20 15:36:22.482957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.873 qpair failed and we were unable to recover it. 00:27:18.873 [2024-11-20 15:36:22.483128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.483160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.483275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.483307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 
00:27:18.874 [2024-11-20 15:36:22.483406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.483439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.483622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.483653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.483781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.483819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.484005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.484040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.484139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.484170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 
00:27:18.874 [2024-11-20 15:36:22.484451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.484484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.484656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.484689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.484861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.484892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.485116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.485149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.485285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.485317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 
00:27:18.874 [2024-11-20 15:36:22.485484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.485516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.485650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.485681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.485880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.485911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.486118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.486152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.486329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.486360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 
00:27:18.874 [2024-11-20 15:36:22.486549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.486580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.486792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.486847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.487044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.487078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.487255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.487287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.487419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.487449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 
00:27:18.874 [2024-11-20 15:36:22.487641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.487674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.487779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.487810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.488047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.488080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.488222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.488255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.488391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.488423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 
00:27:18.874 [2024-11-20 15:36:22.488610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.488643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.488896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.488929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.489160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.489194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.489437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.489470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.489662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.489703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 
00:27:18.874 [2024-11-20 15:36:22.489907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.489940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.490197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.490230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.490423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.490455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.490639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.490669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 00:27:18.874 [2024-11-20 15:36:22.490903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.874 [2024-11-20 15:36:22.490934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.874 qpair failed and we were unable to recover it. 
00:27:18.874 [2024-11-20 15:36:22.491186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.874 [2024-11-20 15:36:22.491217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.874 qpair failed and we were unable to recover it.
00:27:18.876 [2024-11-20 15:36:22.506695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.876 [2024-11-20 15:36:22.506761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420
00:27:18.876 qpair failed and we were unable to recover it.
00:27:18.877 [2024-11-20 15:36:22.517213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.877 [2024-11-20 15:36:22.517259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.877 qpair failed and we were unable to recover it.
00:27:18.878 [2024-11-20 15:36:22.520639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.520671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.520829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.520862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.521052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.521084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.521375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.521407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.521666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.521697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 
00:27:18.878 [2024-11-20 15:36:22.521854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.521885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.522093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.522126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.522317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.522349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.522591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.522623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.522807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.522838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 
00:27:18.878 [2024-11-20 15:36:22.523120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.523153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.523335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.523366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.523590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.523622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.523818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.523854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.523982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.524013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 
00:27:18.878 [2024-11-20 15:36:22.524272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.524304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.524497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.524529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.524700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.524731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.525001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.525034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.525247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.525279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 
00:27:18.878 [2024-11-20 15:36:22.525392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.525424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.525611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.525643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.525901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.525933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.526070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.526102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.526223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.526255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 
00:27:18.878 [2024-11-20 15:36:22.526455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.526488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.526603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.526634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.526824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.526855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.526968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.527001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.527179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.527210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 
00:27:18.878 [2024-11-20 15:36:22.527337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.527369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.527603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.527635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.527838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.527869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.528056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.528088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 00:27:18.878 [2024-11-20 15:36:22.528327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.878 [2024-11-20 15:36:22.528359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.878 qpair failed and we were unable to recover it. 
00:27:18.879 [2024-11-20 15:36:22.528534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.528565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.528742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.528773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.528962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.528994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.529199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.529232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.529418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.529450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 
00:27:18.879 [2024-11-20 15:36:22.529628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.529687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.529963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.529995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.530271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.530303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.530582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.530613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.530801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.530832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 
00:27:18.879 [2024-11-20 15:36:22.531027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.531061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.531344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.531375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.531597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.531629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.531824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.531856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.532035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.532068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 
00:27:18.879 [2024-11-20 15:36:22.532239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.532271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.532550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.532582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.532847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.532880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.533133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.533165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.533360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.533391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 
00:27:18.879 [2024-11-20 15:36:22.533676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.533708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.533903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.533934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.534138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.534171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.534442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.534474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.534751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.534783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 
00:27:18.879 [2024-11-20 15:36:22.535034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.535067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.535316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.535348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.535577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.535608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.535788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.535819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.536064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.536098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 
00:27:18.879 [2024-11-20 15:36:22.536338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.536370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.536559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.536591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.536710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.536741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.537013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.537047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.537327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.537358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 
00:27:18.879 [2024-11-20 15:36:22.537638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.537669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.537954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.537987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.538099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.879 [2024-11-20 15:36:22.538131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.879 qpair failed and we were unable to recover it. 00:27:18.879 [2024-11-20 15:36:22.538390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.538422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.538611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.538643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 
00:27:18.880 [2024-11-20 15:36:22.538775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.538806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.538993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.539026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.539202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.539234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.539474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.539505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.539688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.539721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 
00:27:18.880 [2024-11-20 15:36:22.539915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.539956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.540258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.540302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.540568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.540601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.540878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.540910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.541136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.541170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 
00:27:18.880 [2024-11-20 15:36:22.541459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.541491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.541737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.541767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.541953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.541986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.542250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.542282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.542568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.542598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 
00:27:18.880 [2024-11-20 15:36:22.542844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.542875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.543132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.543165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.543452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.543484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.543755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.543787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.544027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.544074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 
00:27:18.880 [2024-11-20 15:36:22.544266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.544299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.544540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.544572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.544813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.544845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.545111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.545144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.545351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.545383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 
00:27:18.880 [2024-11-20 15:36:22.545669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.545701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.545977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.546010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.546275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.546308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.546549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.546581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.546791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.546823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 
00:27:18.880 [2024-11-20 15:36:22.547103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.547137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.547415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.547447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.547621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.547653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.547798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.547831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.548093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.548125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 
00:27:18.880 [2024-11-20 15:36:22.548406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.880 [2024-11-20 15:36:22.548438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.880 qpair failed and we were unable to recover it. 00:27:18.880 [2024-11-20 15:36:22.548654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.548686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.548787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.548818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.549059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.549092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.549385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.549416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 
00:27:18.881 [2024-11-20 15:36:22.549709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.549739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.549976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.550008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.550199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.550230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.550517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.550547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.550812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.550844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 
00:27:18.881 [2024-11-20 15:36:22.551139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.551172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.551463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.551507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.551727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.551759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.552040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.552075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.552342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.552375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 
00:27:18.881 [2024-11-20 15:36:22.552615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.552645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.552930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.552973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.553260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.553292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.553554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.553586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.553794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.553826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 
00:27:18.881 [2024-11-20 15:36:22.553966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.553999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.554212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.554243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.554498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.554529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.554769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.554801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.555091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.555131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 
00:27:18.881 [2024-11-20 15:36:22.555380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.555413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.555731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.555762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.556012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.556044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.556308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.556340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.556625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.556655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 
00:27:18.881 [2024-11-20 15:36:22.556930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.556969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.557158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.557189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.557363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.557394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.557497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.557527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 00:27:18.881 [2024-11-20 15:36:22.557787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.881 [2024-11-20 15:36:22.557817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.881 qpair failed and we were unable to recover it. 
00:27:18.881 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:18.881 [2024-11-20 15:36:22.558122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.558156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:18.882 [2024-11-20 15:36:22.558361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.558393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.558634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.558667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:18.882 [2024-11-20 15:36:22.558972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.559006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.882 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:18.882 qpair failed and we were unable to recover it. 
00:27:18.882 [2024-11-20 15:36:22.559230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.559262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.882 [2024-11-20 15:36:22.559452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.559483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.559679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.559709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.559961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.559994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.560122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.560154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 
00:27:18.882 [2024-11-20 15:36:22.560341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.560371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.560613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.560644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.560931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.560971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.561265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.561297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.561480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.561512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 
00:27:18.882 [2024-11-20 15:36:22.561832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.561872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.562123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.562157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.562342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.562374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.562496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.562528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.562791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.562824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 
00:27:18.882 [2024-11-20 15:36:22.563112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.563149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.563352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.563385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.563601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.563633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.563841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.563874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.564099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.564132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 
00:27:18.882 [2024-11-20 15:36:22.564346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.564378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.564583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.564615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.564806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.564838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.565100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.565135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.565284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.565317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 
00:27:18.882 [2024-11-20 15:36:22.565604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.565636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.565755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.565787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.566071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.566106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.566301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.566333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.566552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.566585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 
00:27:18.882 [2024-11-20 15:36:22.566771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.566804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.566992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.567026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.882 [2024-11-20 15:36:22.567246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.882 [2024-11-20 15:36:22.567278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.882 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.567476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.567508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.567775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.567807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 
00:27:18.883 [2024-11-20 15:36:22.567998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.568032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.568218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.568251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.568443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.568479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.568676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.568707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.568886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.568917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 
00:27:18.883 [2024-11-20 15:36:22.569091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.569123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.569268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.569301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.569492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.569524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.569645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.569677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.569944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.569984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 
00:27:18.883 [2024-11-20 15:36:22.570087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.570118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.570329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.570361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.570604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.570634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.570849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.570881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.571073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.571106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 
00:27:18.883 [2024-11-20 15:36:22.571341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.571373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.571580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.571612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.571731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.571762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.572002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.572035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.572247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.572277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 
00:27:18.883 [2024-11-20 15:36:22.572523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.572553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.572740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.572772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.572891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.572921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.573047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.573078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.573267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.573299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 
00:27:18.883 [2024-11-20 15:36:22.573439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.573470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.573667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.573699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.573884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.573915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.574196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.574269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.574536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.574572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 
00:27:18.883 [2024-11-20 15:36:22.574793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.574826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.574964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.574998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.575184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.575216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.575393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.575424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.883 [2024-11-20 15:36:22.575551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.575584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 
00:27:18.883 [2024-11-20 15:36:22.575797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.883 [2024-11-20 15:36:22.575829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.883 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.576005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.576038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.576222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.576256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.576374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.576406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.576644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.576676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 
00:27:18.884 [2024-11-20 15:36:22.576859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.576892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.577084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.577118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.577363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.577401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.577591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.577621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.577734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.577765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 
00:27:18.884 [2024-11-20 15:36:22.577905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.577937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.578131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.578162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.578341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.578372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.578617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.578647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.578780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.578812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 
00:27:18.884 [2024-11-20 15:36:22.578915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.578957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.579098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.579130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.579249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.579281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.579393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.579426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.579551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.579583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 
00:27:18.884 [2024-11-20 15:36:22.579705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.579737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.579858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.579891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.580001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.580032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.580204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.580236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.580428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.580460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 
00:27:18.884 [2024-11-20 15:36:22.580667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.580699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.580940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.580984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.581094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.581125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.581271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.581303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.581489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.581520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 
00:27:18.884 [2024-11-20 15:36:22.581633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.581664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.581841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.581873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.581995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.582030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.582273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.582306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.582485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.582517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 
00:27:18.884 [2024-11-20 15:36:22.582641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.582674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.582807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.582839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.582958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.884 [2024-11-20 15:36:22.582991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.884 qpair failed and we were unable to recover it. 00:27:18.884 [2024-11-20 15:36:22.583120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.885 [2024-11-20 15:36:22.583154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.885 qpair failed and we were unable to recover it. 00:27:18.885 [2024-11-20 15:36:22.583268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.885 [2024-11-20 15:36:22.583300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.885 qpair failed and we were unable to recover it. 
00:27:18.885 [2024-11-20 15:36:22.583423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.885 [2024-11-20 15:36:22.583456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.885 qpair failed and we were unable to recover it. 00:27:18.885 [2024-11-20 15:36:22.583567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.885 [2024-11-20 15:36:22.583600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.885 qpair failed and we were unable to recover it. 00:27:18.885 [2024-11-20 15:36:22.583714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.885 [2024-11-20 15:36:22.583745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.885 qpair failed and we were unable to recover it. 00:27:18.885 [2024-11-20 15:36:22.583902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.885 [2024-11-20 15:36:22.583933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.885 qpair failed and we were unable to recover it. 00:27:18.885 [2024-11-20 15:36:22.584048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.885 [2024-11-20 15:36:22.584080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.885 qpair failed and we were unable to recover it. 
00:27:18.885 [2024-11-20 15:36:22.584182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.885 [2024-11-20 15:36:22.584213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.885 qpair failed and we were unable to recover it. 00:27:18.885 [2024-11-20 15:36:22.584316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.885 [2024-11-20 15:36:22.584347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.885 qpair failed and we were unable to recover it. 00:27:18.885 [2024-11-20 15:36:22.584537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.885 [2024-11-20 15:36:22.584570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.885 qpair failed and we were unable to recover it. 00:27:18.885 [2024-11-20 15:36:22.584774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.885 [2024-11-20 15:36:22.584809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.885 qpair failed and we were unable to recover it. 00:27:18.885 [2024-11-20 15:36:22.584917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.885 [2024-11-20 15:36:22.584957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420 00:27:18.885 qpair failed and we were unable to recover it. 
00:27:18.885 [2024-11-20 15:36:22.585072 - 15:36:22.592957] (45 further identical retries elided: connect() failed, errno = 111; sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:27:18.886 [2024-11-20 15:36:22.593073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.886 [2024-11-20 15:36:22.593104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.886 qpair failed and we were unable to recover it.
00:27:18.886 [2024-11-20 15:36:22.593275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.886 [2024-11-20 15:36:22.593305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.886 qpair failed and we were unable to recover it.
00:27:18.886 [2024-11-20 15:36:22.593477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.886 [2024-11-20 15:36:22.593509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.886 qpair failed and we were unable to recover it.
00:27:18.886 [2024-11-20 15:36:22.593630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.886 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:18.886 [2024-11-20 15:36:22.593663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.886 qpair failed and we were unable to recover it.
00:27:18.886 [2024-11-20 15:36:22.593925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.886 [2024-11-20 15:36:22.593970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.886 qpair failed and we were unable to recover it.
00:27:18.886 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:18.886 [2024-11-20 15:36:22.594080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.886 [2024-11-20 15:36:22.594112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.886 qpair failed and we were unable to recover it.
00:27:18.886 [2024-11-20 15:36:22.594289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.886 [2024-11-20 15:36:22.594327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.886 qpair failed and we were unable to recover it.
00:27:18.886 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:18.886 [2024-11-20 15:36:22.594446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.886 [2024-11-20 15:36:22.594479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.886 qpair failed and we were unable to recover it.
00:27:18.886 [2024-11-20 15:36:22.594583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.886 [2024-11-20 15:36:22.594615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.886 qpair failed and we were unable to recover it.
00:27:18.887 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:18.887 [2024-11-20 15:36:22.594786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.887 [2024-11-20 15:36:22.594818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.887 qpair failed and we were unable to recover it.
00:27:18.887 [2024-11-20 15:36:22.594989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.887 [2024-11-20 15:36:22.595022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.887 qpair failed and we were unable to recover it.
00:27:18.887 [2024-11-20 15:36:22.595136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.887 [2024-11-20 15:36:22.595168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.887 qpair failed and we were unable to recover it.
00:27:18.887 [2024-11-20 15:36:22.595288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.887 [2024-11-20 15:36:22.595320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.887 qpair failed and we were unable to recover it.
00:27:18.887 [2024-11-20 15:36:22.595430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.887 [2024-11-20 15:36:22.595461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.887 qpair failed and we were unable to recover it.
00:27:18.887 [2024-11-20 15:36:22.595630 - 15:36:22.601729] (30 further identical retries elided: connect() failed, errno = 111; sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:27:18.888 [2024-11-20 15:36:22.601978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.888 [2024-11-20 15:36:22.602012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.888 qpair failed and we were unable to recover it.
00:27:18.888 [2024-11-20 15:36:22.602137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.888 [2024-11-20 15:36:22.602169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841ba0 with addr=10.0.0.2, port=4420
00:27:18.888 qpair failed and we were unable to recover it.
00:27:18.888 [2024-11-20 15:36:22.602325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.888 [2024-11-20 15:36:22.602377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdeec000b90 with addr=10.0.0.2, port=4420
00:27:18.888 qpair failed and we were unable to recover it.
00:27:18.888 [2024-11-20 15:36:22.602587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.888 [2024-11-20 15:36:22.602628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.888 qpair failed and we were unable to recover it.
00:27:18.888 [2024-11-20 15:36:22.602834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.888 [2024-11-20 15:36:22.602866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.888 qpair failed and we were unable to recover it.
00:27:18.888 [2024-11-20 15:36:22.603059 - 15:36:22.606343] (15 further identical retries elided: connect() failed, errno = 111; sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:27:18.888 [2024-11-20 15:36:22.606602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.888 [2024-11-20 15:36:22.606633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.888 qpair failed and we were unable to recover it. 00:27:18.888 [2024-11-20 15:36:22.606873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.888 [2024-11-20 15:36:22.606905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.888 qpair failed and we were unable to recover it. 00:27:18.888 [2024-11-20 15:36:22.607177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.888 [2024-11-20 15:36:22.607209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.888 qpair failed and we were unable to recover it. 00:27:18.888 [2024-11-20 15:36:22.607396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.888 [2024-11-20 15:36:22.607427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.888 qpair failed and we were unable to recover it. 00:27:18.888 [2024-11-20 15:36:22.607729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.888 [2024-11-20 15:36:22.607760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.888 qpair failed and we were unable to recover it. 
00:27:18.888 [2024-11-20 15:36:22.608048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.888 [2024-11-20 15:36:22.608080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.888 qpair failed and we were unable to recover it. 00:27:18.888 [2024-11-20 15:36:22.608354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.888 [2024-11-20 15:36:22.608386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.888 qpair failed and we were unable to recover it. 00:27:18.888 [2024-11-20 15:36:22.608667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.888 [2024-11-20 15:36:22.608699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.888 qpair failed and we were unable to recover it. 00:27:18.888 [2024-11-20 15:36:22.608902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.888 [2024-11-20 15:36:22.608933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.888 qpair failed and we were unable to recover it. 00:27:18.888 [2024-11-20 15:36:22.609130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.888 [2024-11-20 15:36:22.609162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.888 qpair failed and we were unable to recover it. 
00:27:18.888 [2024-11-20 15:36:22.609375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.888 [2024-11-20 15:36:22.609407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.888 qpair failed and we were unable to recover it. 00:27:18.888 [2024-11-20 15:36:22.609738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.888 [2024-11-20 15:36:22.609769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.888 qpair failed and we were unable to recover it. 00:27:18.888 [2024-11-20 15:36:22.610034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.888 [2024-11-20 15:36:22.610067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.888 qpair failed and we were unable to recover it. 00:27:18.888 [2024-11-20 15:36:22.610254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.888 [2024-11-20 15:36:22.610287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.888 qpair failed and we were unable to recover it. 00:27:18.888 [2024-11-20 15:36:22.610517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.888 [2024-11-20 15:36:22.610548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.888 qpair failed and we were unable to recover it. 
00:27:18.888 [2024-11-20 15:36:22.610789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.888 [2024-11-20 15:36:22.610820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.888 qpair failed and we were unable to recover it. 00:27:18.888 [2024-11-20 15:36:22.611046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.888 [2024-11-20 15:36:22.611079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.888 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.611323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.611355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.611494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.611526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.611700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.611732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 
00:27:18.889 [2024-11-20 15:36:22.611944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.611989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.612130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.612161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.612405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.612437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.612723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.612756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.613016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.613049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 
00:27:18.889 [2024-11-20 15:36:22.613237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.613268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.613492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.613524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.613769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.613801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.614068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.614101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.614327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.614359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 
00:27:18.889 [2024-11-20 15:36:22.614558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.614590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.614780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.614812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.614990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.615022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.615286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.615318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.615522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.615553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 
00:27:18.889 [2024-11-20 15:36:22.615674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.615710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.615967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.616000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.616261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.616293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.616477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.616509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.616701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.616733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 
00:27:18.889 [2024-11-20 15:36:22.616916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.616977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.617244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.617277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.617565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.617598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.617859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.617891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.618179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.618212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 
00:27:18.889 [2024-11-20 15:36:22.618411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.618444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.618625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.618657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.618927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.618970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.619241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.619273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.619612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.619644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 
00:27:18.889 [2024-11-20 15:36:22.619913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.889 [2024-11-20 15:36:22.619944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.889 qpair failed and we were unable to recover it. 00:27:18.889 [2024-11-20 15:36:22.620238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.620271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.620468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.620499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.620692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.620724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.620982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.621016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 
00:27:18.890 [2024-11-20 15:36:22.621228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.621260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.621469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.621500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.621615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.621646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.621884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.621917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.622229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.622273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 
00:27:18.890 [2024-11-20 15:36:22.622535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.622567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.622866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.622898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef8000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.623172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.623209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.623401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.623434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.623619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.623652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 
00:27:18.890 [2024-11-20 15:36:22.623915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.623946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.624238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.624270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.624559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.624591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.624775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.624807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.625076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.625110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 
00:27:18.890 [2024-11-20 15:36:22.625354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.625386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.625576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.625607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.625729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.625760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.625980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.626013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.626277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.626309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 
00:27:18.890 [2024-11-20 15:36:22.626627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.626666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 Malloc0 00:27:18.890 [2024-11-20 15:36:22.626894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.626927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.627236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.627269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.627502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.627535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.890 [2024-11-20 15:36:22.627790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.627821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 
00:27:18.890 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:18.890 [2024-11-20 15:36:22.628074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.628108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.890 [2024-11-20 15:36:22.628283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.628317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.628556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.890 [2024-11-20 15:36:22.628587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 00:27:18.890 [2024-11-20 15:36:22.628765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.890 [2024-11-20 15:36:22.628797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420 00:27:18.890 qpair failed and we were unable to recover it. 
00:27:18.891 [2024-11-20 15:36:22.634364] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:18.892 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:18.892 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:18.892 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:18.892 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:18.893 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:18.893 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:18.893 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:18.893 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:18.893 [2024-11-20 15:36:22.653490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.893 [2024-11-20 15:36:22.653528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.893 qpair failed and we were unable to recover it.
00:27:18.893 [2024-11-20 15:36:22.653809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.893 [2024-11-20 15:36:22.653841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.893 qpair failed and we were unable to recover it.
00:27:18.893 [2024-11-20 15:36:22.654033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.893 [2024-11-20 15:36:22.654065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.893 qpair failed and we were unable to recover it.
00:27:18.893 [2024-11-20 15:36:22.654199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.893 [2024-11-20 15:36:22.654230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.893 qpair failed and we were unable to recover it.
00:27:18.893 [2024-11-20 15:36:22.654399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.893 [2024-11-20 15:36:22.654430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.893 qpair failed and we were unable to recover it.
00:27:18.893 [2024-11-20 15:36:22.654692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.893 [2024-11-20 15:36:22.654723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.893 qpair failed and we were unable to recover it.
00:27:18.893 [2024-11-20 15:36:22.654827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.893 [2024-11-20 15:36:22.654856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.893 qpair failed and we were unable to recover it.
00:27:18.893 [2024-11-20 15:36:22.655132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.893 [2024-11-20 15:36:22.655164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.893 qpair failed and we were unable to recover it.
00:27:18.893 [2024-11-20 15:36:22.655336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.893 [2024-11-20 15:36:22.655367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.893 qpair failed and we were unable to recover it.
00:27:18.893 [2024-11-20 15:36:22.655505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.893 [2024-11-20 15:36:22.655535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.893 qpair failed and we were unable to recover it.
00:27:18.893 [2024-11-20 15:36:22.655796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.893 [2024-11-20 15:36:22.655826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.893 qpair failed and we were unable to recover it.
00:27:18.893 [2024-11-20 15:36:22.656030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.893 [2024-11-20 15:36:22.656060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.893 qpair failed and we were unable to recover it.
00:27:18.893 [2024-11-20 15:36:22.656309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.893 [2024-11-20 15:36:22.656340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.893 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.656605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 [2024-11-20 15:36:22.656636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.656916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 [2024-11-20 15:36:22.656968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.657224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 [2024-11-20 15:36:22.657256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.657529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 [2024-11-20 15:36:22.657559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.657763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 [2024-11-20 15:36:22.657794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.658046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 [2024-11-20 15:36:22.658078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.658270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 [2024-11-20 15:36:22.658301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.658563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 [2024-11-20 15:36:22.658595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.658882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 [2024-11-20 15:36:22.658912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.659112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:18.894 [2024-11-20 15:36:22.659146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.659383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 [2024-11-20 15:36:22.659414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:18.894 [2024-11-20 15:36:22.659603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 [2024-11-20 15:36:22.659635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:18.894 [2024-11-20 15:36:22.659878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 [2024-11-20 15:36:22.659916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.660115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:18.894 [2024-11-20 15:36:22.660147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.660415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 [2024-11-20 15:36:22.660446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.660709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 [2024-11-20 15:36:22.660739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.660991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 [2024-11-20 15:36:22.661023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.661285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 [2024-11-20 15:36:22.661316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.661601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 [2024-11-20 15:36:22.661631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.661907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 [2024-11-20 15:36:22.661939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.662171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 [2024-11-20 15:36:22.662202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.662415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.894 [2024-11-20 15:36:22.662445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdef0000b90 with addr=10.0.0.2, port=4420
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.662612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:18.894 [2024-11-20 15:36:22.665050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.894 [2024-11-20 15:36:22.665185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.894 [2024-11-20 15:36:22.665231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.894 [2024-11-20 15:36:22.665253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.894 [2024-11-20 15:36:22.665276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:18.894 [2024-11-20 15:36:22.665329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:18.894 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:18.894 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:18.894 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:18.894 [2024-11-20 15:36:22.674990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.894 [2024-11-20 15:36:22.675092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.894 [2024-11-20 15:36:22.675133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.894 [2024-11-20 15:36:22.675156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.894 [2024-11-20 15:36:22.675177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:18.894 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:18.894 [2024-11-20 15:36:22.675224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 15:36:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2322539
00:27:18.894 [2024-11-20 15:36:22.684984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.894 [2024-11-20 15:36:22.685059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.894 [2024-11-20 15:36:22.685084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.894 [2024-11-20 15:36:22.685097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.894 [2024-11-20 15:36:22.685110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:18.894 [2024-11-20 15:36:22.685140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.894 qpair failed and we were unable to recover it.
00:27:18.894 [2024-11-20 15:36:22.694975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.894 [2024-11-20 15:36:22.695045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.894 [2024-11-20 15:36:22.695063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.894 [2024-11-20 15:36:22.695072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.894 [2024-11-20 15:36:22.695081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:18.894 [2024-11-20 15:36:22.695102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.895 qpair failed and we were unable to recover it.
00:27:18.895 [2024-11-20 15:36:22.704952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.895 [2024-11-20 15:36:22.705012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.895 [2024-11-20 15:36:22.705027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.895 [2024-11-20 15:36:22.705037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.895 [2024-11-20 15:36:22.705043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:18.895 [2024-11-20 15:36:22.705058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.895 qpair failed and we were unable to recover it.
00:27:18.895 [2024-11-20 15:36:22.714991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.895 [2024-11-20 15:36:22.715060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.895 [2024-11-20 15:36:22.715074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.895 [2024-11-20 15:36:22.715080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.895 [2024-11-20 15:36:22.715086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:18.895 [2024-11-20 15:36:22.715101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.895 qpair failed and we were unable to recover it.
00:27:18.895 [2024-11-20 15:36:22.724988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.895 [2024-11-20 15:36:22.725042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.895 [2024-11-20 15:36:22.725056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.895 [2024-11-20 15:36:22.725063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.895 [2024-11-20 15:36:22.725069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:18.895 [2024-11-20 15:36:22.725084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.895 qpair failed and we were unable to recover it.
00:27:18.895 [2024-11-20 15:36:22.735001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.895 [2024-11-20 15:36:22.735069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.895 [2024-11-20 15:36:22.735083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.895 [2024-11-20 15:36:22.735090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.895 [2024-11-20 15:36:22.735095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:18.895 [2024-11-20 15:36:22.735111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.895 qpair failed and we were unable to recover it.
00:27:18.895 [2024-11-20 15:36:22.745038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.895 [2024-11-20 15:36:22.745105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.895 [2024-11-20 15:36:22.745119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.895 [2024-11-20 15:36:22.745126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.895 [2024-11-20 15:36:22.745132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:18.895 [2024-11-20 15:36:22.745150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.895 qpair failed and we were unable to recover it.
00:27:18.895 [2024-11-20 15:36:22.755082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.895 [2024-11-20 15:36:22.755139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.895 [2024-11-20 15:36:22.755152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.895 [2024-11-20 15:36:22.755159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.895 [2024-11-20 15:36:22.755165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:18.895 [2024-11-20 15:36:22.755180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:18.895 qpair failed and we were unable to recover it.
00:27:19.155 [2024-11-20 15:36:22.765124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.155 [2024-11-20 15:36:22.765174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.155 [2024-11-20 15:36:22.765187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.155 [2024-11-20 15:36:22.765193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.155 [2024-11-20 15:36:22.765200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.155 [2024-11-20 15:36:22.765215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.155 qpair failed and we were unable to recover it.
00:27:19.155 [2024-11-20 15:36:22.775178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.155 [2024-11-20 15:36:22.775237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.155 [2024-11-20 15:36:22.775251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.155 [2024-11-20 15:36:22.775258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.155 [2024-11-20 15:36:22.775264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.155 [2024-11-20 15:36:22.775279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.155 qpair failed and we were unable to recover it.
00:27:19.155 [2024-11-20 15:36:22.785220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.155 [2024-11-20 15:36:22.785279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.155 [2024-11-20 15:36:22.785293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.155 [2024-11-20 15:36:22.785300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.155 [2024-11-20 15:36:22.785307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.155 [2024-11-20 15:36:22.785322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.155 qpair failed and we were unable to recover it.
00:27:19.155 [2024-11-20 15:36:22.795200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.155 [2024-11-20 15:36:22.795261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.155 [2024-11-20 15:36:22.795274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.155 [2024-11-20 15:36:22.795281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.155 [2024-11-20 15:36:22.795287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.155 [2024-11-20 15:36:22.795302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.155 qpair failed and we were unable to recover it.
00:27:19.155 [2024-11-20 15:36:22.805206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.155 [2024-11-20 15:36:22.805263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.155 [2024-11-20 15:36:22.805276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.155 [2024-11-20 15:36:22.805283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.155 [2024-11-20 15:36:22.805289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.155 [2024-11-20 15:36:22.805304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.155 qpair failed and we were unable to recover it.
00:27:19.155 [2024-11-20 15:36:22.815189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.155 [2024-11-20 15:36:22.815247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.155 [2024-11-20 15:36:22.815260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.155 [2024-11-20 15:36:22.815266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.155 [2024-11-20 15:36:22.815272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.155 [2024-11-20 15:36:22.815287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.155 qpair failed and we were unable to recover it.
00:27:19.155 [2024-11-20 15:36:22.825274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.155 [2024-11-20 15:36:22.825336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.155 [2024-11-20 15:36:22.825348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.155 [2024-11-20 15:36:22.825355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.155 [2024-11-20 15:36:22.825361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.155 [2024-11-20 15:36:22.825377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.155 qpair failed and we were unable to recover it.
00:27:19.155 [2024-11-20 15:36:22.835256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.155 [2024-11-20 15:36:22.835337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.155 [2024-11-20 15:36:22.835353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.155 [2024-11-20 15:36:22.835360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.155 [2024-11-20 15:36:22.835366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.155 [2024-11-20 15:36:22.835380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.155 qpair failed and we were unable to recover it.
00:27:19.155 [2024-11-20 15:36:22.845329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.156 [2024-11-20 15:36:22.845381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.156 [2024-11-20 15:36:22.845395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.156 [2024-11-20 15:36:22.845401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.156 [2024-11-20 15:36:22.845407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.156 [2024-11-20 15:36:22.845422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.156 qpair failed and we were unable to recover it. 
00:27:19.156 [2024-11-20 15:36:22.855393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.156 [2024-11-20 15:36:22.855447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.156 [2024-11-20 15:36:22.855461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.156 [2024-11-20 15:36:22.855468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.156 [2024-11-20 15:36:22.855473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.156 [2024-11-20 15:36:22.855488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.156 qpair failed and we were unable to recover it. 
00:27:19.156 [2024-11-20 15:36:22.865315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.156 [2024-11-20 15:36:22.865368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.156 [2024-11-20 15:36:22.865381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.156 [2024-11-20 15:36:22.865388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.156 [2024-11-20 15:36:22.865394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.156 [2024-11-20 15:36:22.865409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.156 qpair failed and we were unable to recover it. 
00:27:19.156 [2024-11-20 15:36:22.875342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.156 [2024-11-20 15:36:22.875393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.156 [2024-11-20 15:36:22.875406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.156 [2024-11-20 15:36:22.875412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.156 [2024-11-20 15:36:22.875422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.156 [2024-11-20 15:36:22.875438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.156 qpair failed and we were unable to recover it. 
00:27:19.156 [2024-11-20 15:36:22.885373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.156 [2024-11-20 15:36:22.885425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.156 [2024-11-20 15:36:22.885439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.156 [2024-11-20 15:36:22.885446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.156 [2024-11-20 15:36:22.885452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.156 [2024-11-20 15:36:22.885467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.156 qpair failed and we were unable to recover it. 
00:27:19.156 [2024-11-20 15:36:22.895461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.156 [2024-11-20 15:36:22.895516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.156 [2024-11-20 15:36:22.895529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.156 [2024-11-20 15:36:22.895536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.156 [2024-11-20 15:36:22.895542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.156 [2024-11-20 15:36:22.895556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.156 qpair failed and we were unable to recover it. 
00:27:19.156 [2024-11-20 15:36:22.905530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.156 [2024-11-20 15:36:22.905615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.156 [2024-11-20 15:36:22.905628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.156 [2024-11-20 15:36:22.905635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.156 [2024-11-20 15:36:22.905641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.156 [2024-11-20 15:36:22.905656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.156 qpair failed and we were unable to recover it. 
00:27:19.156 [2024-11-20 15:36:22.915529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.156 [2024-11-20 15:36:22.915584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.156 [2024-11-20 15:36:22.915597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.156 [2024-11-20 15:36:22.915603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.156 [2024-11-20 15:36:22.915610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.156 [2024-11-20 15:36:22.915625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.156 qpair failed and we were unable to recover it. 
00:27:19.156 [2024-11-20 15:36:22.925521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.156 [2024-11-20 15:36:22.925574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.156 [2024-11-20 15:36:22.925588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.156 [2024-11-20 15:36:22.925595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.156 [2024-11-20 15:36:22.925601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.156 [2024-11-20 15:36:22.925615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.156 qpair failed and we were unable to recover it. 
00:27:19.156 [2024-11-20 15:36:22.935586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.156 [2024-11-20 15:36:22.935643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.156 [2024-11-20 15:36:22.935656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.156 [2024-11-20 15:36:22.935663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.156 [2024-11-20 15:36:22.935669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.156 [2024-11-20 15:36:22.935683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.156 qpair failed and we were unable to recover it. 
00:27:19.156 [2024-11-20 15:36:22.945644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.156 [2024-11-20 15:36:22.945700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.156 [2024-11-20 15:36:22.945713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.156 [2024-11-20 15:36:22.945720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.156 [2024-11-20 15:36:22.945726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.156 [2024-11-20 15:36:22.945741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.156 qpair failed and we were unable to recover it. 
00:27:19.156 [2024-11-20 15:36:22.955618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.156 [2024-11-20 15:36:22.955673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.156 [2024-11-20 15:36:22.955686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.156 [2024-11-20 15:36:22.955693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.156 [2024-11-20 15:36:22.955699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.156 [2024-11-20 15:36:22.955713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.156 qpair failed and we were unable to recover it. 
00:27:19.156 [2024-11-20 15:36:22.965640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.156 [2024-11-20 15:36:22.965690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.156 [2024-11-20 15:36:22.965706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.156 [2024-11-20 15:36:22.965713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.156 [2024-11-20 15:36:22.965719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.156 [2024-11-20 15:36:22.965733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.156 qpair failed and we were unable to recover it. 
00:27:19.156 [2024-11-20 15:36:22.975633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.157 [2024-11-20 15:36:22.975688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.157 [2024-11-20 15:36:22.975702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.157 [2024-11-20 15:36:22.975708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.157 [2024-11-20 15:36:22.975714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.157 [2024-11-20 15:36:22.975729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.157 qpair failed and we were unable to recover it. 
00:27:19.157 [2024-11-20 15:36:22.985672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.157 [2024-11-20 15:36:22.985730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.157 [2024-11-20 15:36:22.985743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.157 [2024-11-20 15:36:22.985751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.157 [2024-11-20 15:36:22.985756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.157 [2024-11-20 15:36:22.985772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.157 qpair failed and we were unable to recover it. 
00:27:19.157 [2024-11-20 15:36:22.995804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.157 [2024-11-20 15:36:22.995855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.157 [2024-11-20 15:36:22.995868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.157 [2024-11-20 15:36:22.995875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.157 [2024-11-20 15:36:22.995881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.157 [2024-11-20 15:36:22.995895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.157 qpair failed and we were unable to recover it. 
00:27:19.157 [2024-11-20 15:36:23.005715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.157 [2024-11-20 15:36:23.005767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.157 [2024-11-20 15:36:23.005780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.157 [2024-11-20 15:36:23.005787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.157 [2024-11-20 15:36:23.005797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.157 [2024-11-20 15:36:23.005812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.157 qpair failed and we were unable to recover it. 
00:27:19.157 [2024-11-20 15:36:23.015749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.157 [2024-11-20 15:36:23.015808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.157 [2024-11-20 15:36:23.015822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.157 [2024-11-20 15:36:23.015829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.157 [2024-11-20 15:36:23.015835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.157 [2024-11-20 15:36:23.015850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.157 qpair failed and we were unable to recover it. 
00:27:19.157 [2024-11-20 15:36:23.025883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.157 [2024-11-20 15:36:23.025939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.157 [2024-11-20 15:36:23.025957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.157 [2024-11-20 15:36:23.025963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.157 [2024-11-20 15:36:23.025969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.157 [2024-11-20 15:36:23.025984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.157 qpair failed and we were unable to recover it. 
00:27:19.157 [2024-11-20 15:36:23.035847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.157 [2024-11-20 15:36:23.035940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.157 [2024-11-20 15:36:23.035960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.157 [2024-11-20 15:36:23.035967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.157 [2024-11-20 15:36:23.035972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.157 [2024-11-20 15:36:23.035988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.157 qpair failed and we were unable to recover it. 
00:27:19.157 [2024-11-20 15:36:23.045892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.157 [2024-11-20 15:36:23.045953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.157 [2024-11-20 15:36:23.045968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.157 [2024-11-20 15:36:23.045974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.157 [2024-11-20 15:36:23.045980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.157 [2024-11-20 15:36:23.045995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.157 qpair failed and we were unable to recover it. 
00:27:19.157 [2024-11-20 15:36:23.055937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.157 [2024-11-20 15:36:23.056005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.157 [2024-11-20 15:36:23.056018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.157 [2024-11-20 15:36:23.056025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.157 [2024-11-20 15:36:23.056031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.157 [2024-11-20 15:36:23.056046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.157 qpair failed and we were unable to recover it. 
00:27:19.418 [2024-11-20 15:36:23.065899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.418 [2024-11-20 15:36:23.065958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.418 [2024-11-20 15:36:23.065972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.418 [2024-11-20 15:36:23.065979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.418 [2024-11-20 15:36:23.065984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.418 [2024-11-20 15:36:23.066000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.418 qpair failed and we were unable to recover it. 
00:27:19.418 [2024-11-20 15:36:23.075981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.418 [2024-11-20 15:36:23.076039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.418 [2024-11-20 15:36:23.076052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.418 [2024-11-20 15:36:23.076059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.418 [2024-11-20 15:36:23.076065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.418 [2024-11-20 15:36:23.076080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.418 qpair failed and we were unable to recover it. 
00:27:19.418 [2024-11-20 15:36:23.085953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.418 [2024-11-20 15:36:23.086035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.418 [2024-11-20 15:36:23.086048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.418 [2024-11-20 15:36:23.086055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.418 [2024-11-20 15:36:23.086061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.418 [2024-11-20 15:36:23.086076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.418 qpair failed and we were unable to recover it. 
00:27:19.418 [2024-11-20 15:36:23.096060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.418 [2024-11-20 15:36:23.096116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.418 [2024-11-20 15:36:23.096132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.418 [2024-11-20 15:36:23.096139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.418 [2024-11-20 15:36:23.096145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.418 [2024-11-20 15:36:23.096159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.418 qpair failed and we were unable to recover it. 
00:27:19.418 [2024-11-20 15:36:23.106079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.418 [2024-11-20 15:36:23.106162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.418 [2024-11-20 15:36:23.106175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.418 [2024-11-20 15:36:23.106182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.418 [2024-11-20 15:36:23.106188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.418 [2024-11-20 15:36:23.106203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.418 qpair failed and we were unable to recover it. 
00:27:19.418 [2024-11-20 15:36:23.116119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.418 [2024-11-20 15:36:23.116177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.418 [2024-11-20 15:36:23.116190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.418 [2024-11-20 15:36:23.116197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.418 [2024-11-20 15:36:23.116203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.418 [2024-11-20 15:36:23.116218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.418 qpair failed and we were unable to recover it. 
00:27:19.418 [2024-11-20 15:36:23.126087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.418 [2024-11-20 15:36:23.126144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.418 [2024-11-20 15:36:23.126158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.418 [2024-11-20 15:36:23.126165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.418 [2024-11-20 15:36:23.126171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.418 [2024-11-20 15:36:23.126185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.419 qpair failed and we were unable to recover it. 
00:27:19.419 [2024-11-20 15:36:23.136097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.419 [2024-11-20 15:36:23.136168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.419 [2024-11-20 15:36:23.136182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.419 [2024-11-20 15:36:23.136194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.419 [2024-11-20 15:36:23.136200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.419 [2024-11-20 15:36:23.136216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.419 qpair failed and we were unable to recover it.
00:27:19.419 [2024-11-20 15:36:23.146211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.419 [2024-11-20 15:36:23.146265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.419 [2024-11-20 15:36:23.146279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.419 [2024-11-20 15:36:23.146286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.419 [2024-11-20 15:36:23.146292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.419 [2024-11-20 15:36:23.146307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.419 qpair failed and we were unable to recover it.
00:27:19.419 [2024-11-20 15:36:23.156226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.419 [2024-11-20 15:36:23.156306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.419 [2024-11-20 15:36:23.156318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.419 [2024-11-20 15:36:23.156325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.419 [2024-11-20 15:36:23.156331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.419 [2024-11-20 15:36:23.156345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.419 qpair failed and we were unable to recover it.
00:27:19.419 [2024-11-20 15:36:23.166165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.419 [2024-11-20 15:36:23.166219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.419 [2024-11-20 15:36:23.166232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.419 [2024-11-20 15:36:23.166238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.419 [2024-11-20 15:36:23.166244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.419 [2024-11-20 15:36:23.166259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.419 qpair failed and we were unable to recover it.
00:27:19.419 [2024-11-20 15:36:23.176269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.419 [2024-11-20 15:36:23.176329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.419 [2024-11-20 15:36:23.176342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.419 [2024-11-20 15:36:23.176349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.419 [2024-11-20 15:36:23.176355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.419 [2024-11-20 15:36:23.176374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.419 qpair failed and we were unable to recover it.
00:27:19.419 [2024-11-20 15:36:23.186284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.419 [2024-11-20 15:36:23.186344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.419 [2024-11-20 15:36:23.186356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.419 [2024-11-20 15:36:23.186363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.419 [2024-11-20 15:36:23.186369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.419 [2024-11-20 15:36:23.186384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.419 qpair failed and we were unable to recover it.
00:27:19.419 [2024-11-20 15:36:23.196316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.419 [2024-11-20 15:36:23.196395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.419 [2024-11-20 15:36:23.196407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.419 [2024-11-20 15:36:23.196414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.419 [2024-11-20 15:36:23.196420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.419 [2024-11-20 15:36:23.196435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.419 qpair failed and we were unable to recover it.
00:27:19.419 [2024-11-20 15:36:23.206342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.419 [2024-11-20 15:36:23.206392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.419 [2024-11-20 15:36:23.206405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.419 [2024-11-20 15:36:23.206411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.419 [2024-11-20 15:36:23.206417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.419 [2024-11-20 15:36:23.206432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.419 qpair failed and we were unable to recover it.
00:27:19.419 [2024-11-20 15:36:23.216380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.419 [2024-11-20 15:36:23.216438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.419 [2024-11-20 15:36:23.216451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.419 [2024-11-20 15:36:23.216458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.419 [2024-11-20 15:36:23.216464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.419 [2024-11-20 15:36:23.216478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.419 qpair failed and we were unable to recover it.
00:27:19.419 [2024-11-20 15:36:23.226414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.419 [2024-11-20 15:36:23.226475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.419 [2024-11-20 15:36:23.226489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.419 [2024-11-20 15:36:23.226496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.419 [2024-11-20 15:36:23.226501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.419 [2024-11-20 15:36:23.226516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.419 qpair failed and we were unable to recover it.
00:27:19.419 [2024-11-20 15:36:23.236438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.419 [2024-11-20 15:36:23.236493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.419 [2024-11-20 15:36:23.236506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.419 [2024-11-20 15:36:23.236513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.419 [2024-11-20 15:36:23.236519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.419 [2024-11-20 15:36:23.236534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.419 qpair failed and we were unable to recover it.
00:27:19.419 [2024-11-20 15:36:23.246478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.419 [2024-11-20 15:36:23.246545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.419 [2024-11-20 15:36:23.246558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.419 [2024-11-20 15:36:23.246564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.419 [2024-11-20 15:36:23.246571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.419 [2024-11-20 15:36:23.246586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.419 qpair failed and we were unable to recover it.
00:27:19.419 [2024-11-20 15:36:23.256488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.419 [2024-11-20 15:36:23.256544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.419 [2024-11-20 15:36:23.256557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.419 [2024-11-20 15:36:23.256564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.419 [2024-11-20 15:36:23.256570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.419 [2024-11-20 15:36:23.256584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.420 qpair failed and we were unable to recover it.
00:27:19.420 [2024-11-20 15:36:23.266525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.420 [2024-11-20 15:36:23.266576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.420 [2024-11-20 15:36:23.266589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.420 [2024-11-20 15:36:23.266599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.420 [2024-11-20 15:36:23.266605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.420 [2024-11-20 15:36:23.266620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.420 qpair failed and we were unable to recover it.
00:27:19.420 [2024-11-20 15:36:23.276548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.420 [2024-11-20 15:36:23.276601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.420 [2024-11-20 15:36:23.276614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.420 [2024-11-20 15:36:23.276620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.420 [2024-11-20 15:36:23.276626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.420 [2024-11-20 15:36:23.276641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.420 qpair failed and we were unable to recover it.
00:27:19.420 [2024-11-20 15:36:23.286585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.420 [2024-11-20 15:36:23.286635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.420 [2024-11-20 15:36:23.286649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.420 [2024-11-20 15:36:23.286656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.420 [2024-11-20 15:36:23.286662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.420 [2024-11-20 15:36:23.286677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.420 qpair failed and we were unable to recover it.
00:27:19.420 [2024-11-20 15:36:23.296618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.420 [2024-11-20 15:36:23.296673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.420 [2024-11-20 15:36:23.296686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.420 [2024-11-20 15:36:23.296692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.420 [2024-11-20 15:36:23.296698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.420 [2024-11-20 15:36:23.296713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.420 qpair failed and we were unable to recover it.
00:27:19.420 [2024-11-20 15:36:23.306643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.420 [2024-11-20 15:36:23.306694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.420 [2024-11-20 15:36:23.306707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.420 [2024-11-20 15:36:23.306714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.420 [2024-11-20 15:36:23.306720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.420 [2024-11-20 15:36:23.306738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.420 qpair failed and we were unable to recover it.
00:27:19.420 [2024-11-20 15:36:23.316668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.420 [2024-11-20 15:36:23.316721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.420 [2024-11-20 15:36:23.316734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.420 [2024-11-20 15:36:23.316741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.420 [2024-11-20 15:36:23.316747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.420 [2024-11-20 15:36:23.316762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.420 qpair failed and we were unable to recover it.
00:27:19.680 [2024-11-20 15:36:23.326731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.680 [2024-11-20 15:36:23.326781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.680 [2024-11-20 15:36:23.326795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.680 [2024-11-20 15:36:23.326801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.680 [2024-11-20 15:36:23.326807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.680 [2024-11-20 15:36:23.326822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.680 qpair failed and we were unable to recover it.
00:27:19.680 [2024-11-20 15:36:23.336734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.680 [2024-11-20 15:36:23.336793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.680 [2024-11-20 15:36:23.336808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.680 [2024-11-20 15:36:23.336815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.680 [2024-11-20 15:36:23.336821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.680 [2024-11-20 15:36:23.336836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.680 qpair failed and we were unable to recover it.
00:27:19.680 [2024-11-20 15:36:23.346778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.680 [2024-11-20 15:36:23.346840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.680 [2024-11-20 15:36:23.346853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.680 [2024-11-20 15:36:23.346860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.680 [2024-11-20 15:36:23.346866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.680 [2024-11-20 15:36:23.346881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.680 qpair failed and we were unable to recover it.
00:27:19.680 [2024-11-20 15:36:23.356777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.680 [2024-11-20 15:36:23.356826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.680 [2024-11-20 15:36:23.356839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.680 [2024-11-20 15:36:23.356846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.680 [2024-11-20 15:36:23.356852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.680 [2024-11-20 15:36:23.356866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.680 qpair failed and we were unable to recover it.
00:27:19.680 [2024-11-20 15:36:23.366813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.680 [2024-11-20 15:36:23.366875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.680 [2024-11-20 15:36:23.366887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.681 [2024-11-20 15:36:23.366894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.681 [2024-11-20 15:36:23.366899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.681 [2024-11-20 15:36:23.366914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.681 qpair failed and we were unable to recover it.
00:27:19.681 [2024-11-20 15:36:23.376771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.681 [2024-11-20 15:36:23.376823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.681 [2024-11-20 15:36:23.376836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.681 [2024-11-20 15:36:23.376842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.681 [2024-11-20 15:36:23.376848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.681 [2024-11-20 15:36:23.376863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.681 qpair failed and we were unable to recover it.
00:27:19.681 [2024-11-20 15:36:23.386869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.681 [2024-11-20 15:36:23.386921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.681 [2024-11-20 15:36:23.386934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.681 [2024-11-20 15:36:23.386940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.681 [2024-11-20 15:36:23.386949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.681 [2024-11-20 15:36:23.386965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.681 qpair failed and we were unable to recover it.
00:27:19.681 [2024-11-20 15:36:23.396896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.681 [2024-11-20 15:36:23.396963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.681 [2024-11-20 15:36:23.396980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.681 [2024-11-20 15:36:23.396986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.681 [2024-11-20 15:36:23.396992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.681 [2024-11-20 15:36:23.397007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.681 qpair failed and we were unable to recover it.
00:27:19.681 [2024-11-20 15:36:23.406919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.681 [2024-11-20 15:36:23.407014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.681 [2024-11-20 15:36:23.407027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.681 [2024-11-20 15:36:23.407033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.681 [2024-11-20 15:36:23.407039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.681 [2024-11-20 15:36:23.407055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.681 qpair failed and we were unable to recover it.
00:27:19.681 [2024-11-20 15:36:23.416935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.681 [2024-11-20 15:36:23.417004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.681 [2024-11-20 15:36:23.417017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.681 [2024-11-20 15:36:23.417024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.681 [2024-11-20 15:36:23.417030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.681 [2024-11-20 15:36:23.417045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.681 qpair failed and we were unable to recover it.
00:27:19.681 [2024-11-20 15:36:23.427015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.681 [2024-11-20 15:36:23.427071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.681 [2024-11-20 15:36:23.427084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.681 [2024-11-20 15:36:23.427091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.681 [2024-11-20 15:36:23.427097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.681 [2024-11-20 15:36:23.427111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.681 qpair failed and we were unable to recover it.
00:27:19.681 [2024-11-20 15:36:23.437062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.681 [2024-11-20 15:36:23.437112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.681 [2024-11-20 15:36:23.437125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.681 [2024-11-20 15:36:23.437132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.681 [2024-11-20 15:36:23.437141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.681 [2024-11-20 15:36:23.437155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.681 qpair failed and we were unable to recover it.
00:27:19.681 [2024-11-20 15:36:23.447051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.681 [2024-11-20 15:36:23.447103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.681 [2024-11-20 15:36:23.447115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.681 [2024-11-20 15:36:23.447122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.681 [2024-11-20 15:36:23.447128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.681 [2024-11-20 15:36:23.447144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.681 qpair failed and we were unable to recover it.
00:27:19.681 [2024-11-20 15:36:23.457080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.681 [2024-11-20 15:36:23.457136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.681 [2024-11-20 15:36:23.457149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.681 [2024-11-20 15:36:23.457156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.681 [2024-11-20 15:36:23.457161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.681 [2024-11-20 15:36:23.457176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.681 qpair failed and we were unable to recover it.
00:27:19.681 [2024-11-20 15:36:23.467223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.681 [2024-11-20 15:36:23.467291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.681 [2024-11-20 15:36:23.467304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.681 [2024-11-20 15:36:23.467310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.681 [2024-11-20 15:36:23.467317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.681 [2024-11-20 15:36:23.467331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.681 qpair failed and we were unable to recover it.
00:27:19.681 [2024-11-20 15:36:23.477176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.681 [2024-11-20 15:36:23.477240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.681 [2024-11-20 15:36:23.477253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.681 [2024-11-20 15:36:23.477260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.681 [2024-11-20 15:36:23.477266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.681 [2024-11-20 15:36:23.477281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.681 qpair failed and we were unable to recover it.
00:27:19.681 [2024-11-20 15:36:23.487259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.681 [2024-11-20 15:36:23.487384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.681 [2024-11-20 15:36:23.487398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.681 [2024-11-20 15:36:23.487405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.681 [2024-11-20 15:36:23.487411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.681 [2024-11-20 15:36:23.487427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.681 qpair failed and we were unable to recover it. 
00:27:19.681 [2024-11-20 15:36:23.497230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.681 [2024-11-20 15:36:23.497301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.681 [2024-11-20 15:36:23.497315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.681 [2024-11-20 15:36:23.497321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.681 [2024-11-20 15:36:23.497327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.681 [2024-11-20 15:36:23.497342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.681 qpair failed and we were unable to recover it. 
00:27:19.681 [2024-11-20 15:36:23.507238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.681 [2024-11-20 15:36:23.507292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.681 [2024-11-20 15:36:23.507306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.681 [2024-11-20 15:36:23.507312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.681 [2024-11-20 15:36:23.507318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.682 [2024-11-20 15:36:23.507333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.682 qpair failed and we were unable to recover it. 
00:27:19.682 [2024-11-20 15:36:23.517179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.682 [2024-11-20 15:36:23.517232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.682 [2024-11-20 15:36:23.517245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.682 [2024-11-20 15:36:23.517252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.682 [2024-11-20 15:36:23.517258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.682 [2024-11-20 15:36:23.517273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.682 qpair failed and we were unable to recover it. 
00:27:19.682 [2024-11-20 15:36:23.527252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.682 [2024-11-20 15:36:23.527307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.682 [2024-11-20 15:36:23.527324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.682 [2024-11-20 15:36:23.527331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.682 [2024-11-20 15:36:23.527337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.682 [2024-11-20 15:36:23.527352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.682 qpair failed and we were unable to recover it. 
00:27:19.682 [2024-11-20 15:36:23.537318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.682 [2024-11-20 15:36:23.537372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.682 [2024-11-20 15:36:23.537386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.682 [2024-11-20 15:36:23.537393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.682 [2024-11-20 15:36:23.537399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.682 [2024-11-20 15:36:23.537413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.682 qpair failed and we were unable to recover it. 
00:27:19.682 [2024-11-20 15:36:23.547363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.682 [2024-11-20 15:36:23.547419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.682 [2024-11-20 15:36:23.547433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.682 [2024-11-20 15:36:23.547440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.682 [2024-11-20 15:36:23.547446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.682 [2024-11-20 15:36:23.547461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.682 qpair failed and we were unable to recover it. 
00:27:19.682 [2024-11-20 15:36:23.557366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.682 [2024-11-20 15:36:23.557414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.682 [2024-11-20 15:36:23.557427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.682 [2024-11-20 15:36:23.557434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.682 [2024-11-20 15:36:23.557440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.682 [2024-11-20 15:36:23.557456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.682 qpair failed and we were unable to recover it. 
00:27:19.682 [2024-11-20 15:36:23.567385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.682 [2024-11-20 15:36:23.567435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.682 [2024-11-20 15:36:23.567448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.682 [2024-11-20 15:36:23.567454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.682 [2024-11-20 15:36:23.567463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.682 [2024-11-20 15:36:23.567478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.682 qpair failed and we were unable to recover it. 
00:27:19.682 [2024-11-20 15:36:23.577460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.682 [2024-11-20 15:36:23.577567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.682 [2024-11-20 15:36:23.577581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.682 [2024-11-20 15:36:23.577588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.682 [2024-11-20 15:36:23.577594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.682 [2024-11-20 15:36:23.577609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.682 qpair failed and we were unable to recover it. 
00:27:19.942 [2024-11-20 15:36:23.587451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.942 [2024-11-20 15:36:23.587505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.942 [2024-11-20 15:36:23.587518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.942 [2024-11-20 15:36:23.587525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.942 [2024-11-20 15:36:23.587530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.942 [2024-11-20 15:36:23.587545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.942 qpair failed and we were unable to recover it. 
00:27:19.942 [2024-11-20 15:36:23.597489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.942 [2024-11-20 15:36:23.597546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.942 [2024-11-20 15:36:23.597559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.942 [2024-11-20 15:36:23.597565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.942 [2024-11-20 15:36:23.597571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.942 [2024-11-20 15:36:23.597585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.942 qpair failed and we were unable to recover it. 
00:27:19.942 [2024-11-20 15:36:23.607521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.942 [2024-11-20 15:36:23.607575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.942 [2024-11-20 15:36:23.607589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.942 [2024-11-20 15:36:23.607595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.942 [2024-11-20 15:36:23.607602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.942 [2024-11-20 15:36:23.607616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.942 qpair failed and we were unable to recover it. 
00:27:19.942 [2024-11-20 15:36:23.617546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.942 [2024-11-20 15:36:23.617602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.942 [2024-11-20 15:36:23.617616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.942 [2024-11-20 15:36:23.617622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.942 [2024-11-20 15:36:23.617628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.942 [2024-11-20 15:36:23.617643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.942 qpair failed and we were unable to recover it. 
00:27:19.942 [2024-11-20 15:36:23.627568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.943 [2024-11-20 15:36:23.627624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.943 [2024-11-20 15:36:23.627637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.943 [2024-11-20 15:36:23.627644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.943 [2024-11-20 15:36:23.627649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.943 [2024-11-20 15:36:23.627664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.943 qpair failed and we were unable to recover it. 
00:27:19.943 [2024-11-20 15:36:23.637641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.943 [2024-11-20 15:36:23.637695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.943 [2024-11-20 15:36:23.637708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.943 [2024-11-20 15:36:23.637715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.943 [2024-11-20 15:36:23.637721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.943 [2024-11-20 15:36:23.637735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.943 qpair failed and we were unable to recover it. 
00:27:19.943 [2024-11-20 15:36:23.647659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.943 [2024-11-20 15:36:23.647740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.943 [2024-11-20 15:36:23.647753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.943 [2024-11-20 15:36:23.647760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.943 [2024-11-20 15:36:23.647765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.943 [2024-11-20 15:36:23.647780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.943 qpair failed and we were unable to recover it. 
00:27:19.943 [2024-11-20 15:36:23.657605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.943 [2024-11-20 15:36:23.657662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.943 [2024-11-20 15:36:23.657678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.943 [2024-11-20 15:36:23.657685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.943 [2024-11-20 15:36:23.657690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.943 [2024-11-20 15:36:23.657705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.943 qpair failed and we were unable to recover it. 
00:27:19.943 [2024-11-20 15:36:23.667697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.943 [2024-11-20 15:36:23.667760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.943 [2024-11-20 15:36:23.667794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.943 [2024-11-20 15:36:23.667801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.943 [2024-11-20 15:36:23.667807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.943 [2024-11-20 15:36:23.667831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.943 qpair failed and we were unable to recover it. 
00:27:19.943 [2024-11-20 15:36:23.677715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.943 [2024-11-20 15:36:23.677766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.943 [2024-11-20 15:36:23.677781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.943 [2024-11-20 15:36:23.677788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.943 [2024-11-20 15:36:23.677794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.943 [2024-11-20 15:36:23.677809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.943 qpair failed and we were unable to recover it. 
00:27:19.943 [2024-11-20 15:36:23.687738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.943 [2024-11-20 15:36:23.687792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.943 [2024-11-20 15:36:23.687805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.943 [2024-11-20 15:36:23.687812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.943 [2024-11-20 15:36:23.687818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.943 [2024-11-20 15:36:23.687833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.943 qpair failed and we were unable to recover it. 
00:27:19.943 [2024-11-20 15:36:23.697814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.943 [2024-11-20 15:36:23.697869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.943 [2024-11-20 15:36:23.697883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.943 [2024-11-20 15:36:23.697893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.943 [2024-11-20 15:36:23.697899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.943 [2024-11-20 15:36:23.697914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.943 qpair failed and we were unable to recover it. 
00:27:19.943 [2024-11-20 15:36:23.707825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.943 [2024-11-20 15:36:23.707883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.943 [2024-11-20 15:36:23.707897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.943 [2024-11-20 15:36:23.707904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.943 [2024-11-20 15:36:23.707910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.943 [2024-11-20 15:36:23.707925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.943 qpair failed and we were unable to recover it. 
00:27:19.943 [2024-11-20 15:36:23.717859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.943 [2024-11-20 15:36:23.717915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.943 [2024-11-20 15:36:23.717929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.943 [2024-11-20 15:36:23.717935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.943 [2024-11-20 15:36:23.717942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.943 [2024-11-20 15:36:23.717965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.943 qpair failed and we were unable to recover it. 
00:27:19.943 [2024-11-20 15:36:23.727868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.943 [2024-11-20 15:36:23.727921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.943 [2024-11-20 15:36:23.727935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.943 [2024-11-20 15:36:23.727942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.943 [2024-11-20 15:36:23.727951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.943 [2024-11-20 15:36:23.727966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.943 qpair failed and we were unable to recover it. 
00:27:19.943 [2024-11-20 15:36:23.737899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.943 [2024-11-20 15:36:23.737964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.943 [2024-11-20 15:36:23.737978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.943 [2024-11-20 15:36:23.737985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.943 [2024-11-20 15:36:23.737991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.943 [2024-11-20 15:36:23.738009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.943 qpair failed and we were unable to recover it. 
00:27:19.943 [2024-11-20 15:36:23.747853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.943 [2024-11-20 15:36:23.747911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.943 [2024-11-20 15:36:23.747924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.943 [2024-11-20 15:36:23.747931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.943 [2024-11-20 15:36:23.747937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:19.943 [2024-11-20 15:36:23.747954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:19.943 qpair failed and we were unable to recover it. 
00:27:19.944 [2024-11-20 15:36:23.757946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.944 [2024-11-20 15:36:23.758002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.944 [2024-11-20 15:36:23.758015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.944 [2024-11-20 15:36:23.758022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.944 [2024-11-20 15:36:23.758027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.944 [2024-11-20 15:36:23.758042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.944 qpair failed and we were unable to recover it.
00:27:19.944 [2024-11-20 15:36:23.767975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.944 [2024-11-20 15:36:23.768026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.944 [2024-11-20 15:36:23.768039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.944 [2024-11-20 15:36:23.768046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.944 [2024-11-20 15:36:23.768052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.944 [2024-11-20 15:36:23.768067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.944 qpair failed and we were unable to recover it.
00:27:19.944 [2024-11-20 15:36:23.778032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.944 [2024-11-20 15:36:23.778095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.944 [2024-11-20 15:36:23.778108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.944 [2024-11-20 15:36:23.778115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.944 [2024-11-20 15:36:23.778121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.944 [2024-11-20 15:36:23.778136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.944 qpair failed and we were unable to recover it.
00:27:19.944 [2024-11-20 15:36:23.788033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.944 [2024-11-20 15:36:23.788095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.944 [2024-11-20 15:36:23.788109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.944 [2024-11-20 15:36:23.788116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.944 [2024-11-20 15:36:23.788122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.944 [2024-11-20 15:36:23.788136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.944 qpair failed and we were unable to recover it.
00:27:19.944 [2024-11-20 15:36:23.798098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.944 [2024-11-20 15:36:23.798144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.944 [2024-11-20 15:36:23.798157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.944 [2024-11-20 15:36:23.798163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.944 [2024-11-20 15:36:23.798169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.944 [2024-11-20 15:36:23.798183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.944 qpair failed and we were unable to recover it.
00:27:19.944 [2024-11-20 15:36:23.808098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.944 [2024-11-20 15:36:23.808150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.944 [2024-11-20 15:36:23.808163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.944 [2024-11-20 15:36:23.808170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.944 [2024-11-20 15:36:23.808176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.944 [2024-11-20 15:36:23.808189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.944 qpair failed and we were unable to recover it.
00:27:19.944 [2024-11-20 15:36:23.818167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.944 [2024-11-20 15:36:23.818237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.944 [2024-11-20 15:36:23.818250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.944 [2024-11-20 15:36:23.818257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.944 [2024-11-20 15:36:23.818263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.944 [2024-11-20 15:36:23.818277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.944 qpair failed and we were unable to recover it.
00:27:19.944 [2024-11-20 15:36:23.828155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.944 [2024-11-20 15:36:23.828206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.944 [2024-11-20 15:36:23.828219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.944 [2024-11-20 15:36:23.828228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.944 [2024-11-20 15:36:23.828234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.944 [2024-11-20 15:36:23.828249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.944 qpair failed and we were unable to recover it.
00:27:19.944 [2024-11-20 15:36:23.838181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.944 [2024-11-20 15:36:23.838239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.944 [2024-11-20 15:36:23.838252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.944 [2024-11-20 15:36:23.838259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.944 [2024-11-20 15:36:23.838265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:19.944 [2024-11-20 15:36:23.838279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:19.944 qpair failed and we were unable to recover it.
00:27:20.204 [2024-11-20 15:36:23.848213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.204 [2024-11-20 15:36:23.848267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.204 [2024-11-20 15:36:23.848280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.204 [2024-11-20 15:36:23.848286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.204 [2024-11-20 15:36:23.848292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.204 [2024-11-20 15:36:23.848307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.204 qpair failed and we were unable to recover it.
00:27:20.204 [2024-11-20 15:36:23.858243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.204 [2024-11-20 15:36:23.858298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.204 [2024-11-20 15:36:23.858311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.204 [2024-11-20 15:36:23.858318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.204 [2024-11-20 15:36:23.858324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.204 [2024-11-20 15:36:23.858338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.204 qpair failed and we were unable to recover it.
00:27:20.204 [2024-11-20 15:36:23.868277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.204 [2024-11-20 15:36:23.868333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.204 [2024-11-20 15:36:23.868346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.204 [2024-11-20 15:36:23.868353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.204 [2024-11-20 15:36:23.868359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.204 [2024-11-20 15:36:23.868376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.204 qpair failed and we were unable to recover it.
00:27:20.204 [2024-11-20 15:36:23.878294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.204 [2024-11-20 15:36:23.878393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.204 [2024-11-20 15:36:23.878406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.204 [2024-11-20 15:36:23.878412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.204 [2024-11-20 15:36:23.878418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.204 [2024-11-20 15:36:23.878433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.204 qpair failed and we were unable to recover it.
00:27:20.204 [2024-11-20 15:36:23.888300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.204 [2024-11-20 15:36:23.888355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.204 [2024-11-20 15:36:23.888368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.204 [2024-11-20 15:36:23.888374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.204 [2024-11-20 15:36:23.888380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.204 [2024-11-20 15:36:23.888395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.204 qpair failed and we were unable to recover it.
00:27:20.204 [2024-11-20 15:36:23.898344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.204 [2024-11-20 15:36:23.898399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.204 [2024-11-20 15:36:23.898413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.204 [2024-11-20 15:36:23.898419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.204 [2024-11-20 15:36:23.898425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.204 [2024-11-20 15:36:23.898440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.204 qpair failed and we were unable to recover it.
00:27:20.204 [2024-11-20 15:36:23.908378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.204 [2024-11-20 15:36:23.908431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.204 [2024-11-20 15:36:23.908444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.204 [2024-11-20 15:36:23.908450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.204 [2024-11-20 15:36:23.908457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.205 [2024-11-20 15:36:23.908471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.205 qpair failed and we were unable to recover it.
00:27:20.205 [2024-11-20 15:36:23.918405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.205 [2024-11-20 15:36:23.918454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.205 [2024-11-20 15:36:23.918467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.205 [2024-11-20 15:36:23.918474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.205 [2024-11-20 15:36:23.918480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.205 [2024-11-20 15:36:23.918494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.205 qpair failed and we were unable to recover it.
00:27:20.205 [2024-11-20 15:36:23.928348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.205 [2024-11-20 15:36:23.928402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.205 [2024-11-20 15:36:23.928415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.205 [2024-11-20 15:36:23.928422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.205 [2024-11-20 15:36:23.928428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.205 [2024-11-20 15:36:23.928442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.205 qpair failed and we were unable to recover it.
00:27:20.205 [2024-11-20 15:36:23.938395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.205 [2024-11-20 15:36:23.938449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.205 [2024-11-20 15:36:23.938462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.205 [2024-11-20 15:36:23.938469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.205 [2024-11-20 15:36:23.938475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.205 [2024-11-20 15:36:23.938489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.205 qpair failed and we were unable to recover it.
00:27:20.205 [2024-11-20 15:36:23.948495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.205 [2024-11-20 15:36:23.948554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.205 [2024-11-20 15:36:23.948567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.205 [2024-11-20 15:36:23.948574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.205 [2024-11-20 15:36:23.948580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.205 [2024-11-20 15:36:23.948595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.205 qpair failed and we were unable to recover it.
00:27:20.205 [2024-11-20 15:36:23.958524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.205 [2024-11-20 15:36:23.958577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.205 [2024-11-20 15:36:23.958596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.205 [2024-11-20 15:36:23.958603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.205 [2024-11-20 15:36:23.958609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.205 [2024-11-20 15:36:23.958624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.205 qpair failed and we were unable to recover it.
00:27:20.205 [2024-11-20 15:36:23.968559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.205 [2024-11-20 15:36:23.968609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.205 [2024-11-20 15:36:23.968622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.205 [2024-11-20 15:36:23.968629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.205 [2024-11-20 15:36:23.968635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.205 [2024-11-20 15:36:23.968649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.205 qpair failed and we were unable to recover it.
00:27:20.205 [2024-11-20 15:36:23.978510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.205 [2024-11-20 15:36:23.978566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.205 [2024-11-20 15:36:23.978580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.205 [2024-11-20 15:36:23.978586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.205 [2024-11-20 15:36:23.978592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.205 [2024-11-20 15:36:23.978607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.205 qpair failed and we were unable to recover it.
00:27:20.205 [2024-11-20 15:36:23.988612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.205 [2024-11-20 15:36:23.988671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.205 [2024-11-20 15:36:23.988684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.205 [2024-11-20 15:36:23.988690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.205 [2024-11-20 15:36:23.988696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.205 [2024-11-20 15:36:23.988711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.205 qpair failed and we were unable to recover it.
00:27:20.205 [2024-11-20 15:36:23.998627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.205 [2024-11-20 15:36:23.998681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.205 [2024-11-20 15:36:23.998694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.205 [2024-11-20 15:36:23.998700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.205 [2024-11-20 15:36:23.998709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.205 [2024-11-20 15:36:23.998723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.205 qpair failed and we were unable to recover it.
00:27:20.205 [2024-11-20 15:36:24.008710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.205 [2024-11-20 15:36:24.008781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.205 [2024-11-20 15:36:24.008795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.205 [2024-11-20 15:36:24.008802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.205 [2024-11-20 15:36:24.008808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.205 [2024-11-20 15:36:24.008822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.205 qpair failed and we were unable to recover it.
00:27:20.205 [2024-11-20 15:36:24.018713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.205 [2024-11-20 15:36:24.018773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.205 [2024-11-20 15:36:24.018787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.205 [2024-11-20 15:36:24.018794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.205 [2024-11-20 15:36:24.018800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.205 [2024-11-20 15:36:24.018814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.205 qpair failed and we were unable to recover it.
00:27:20.205 [2024-11-20 15:36:24.028728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.205 [2024-11-20 15:36:24.028782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.205 [2024-11-20 15:36:24.028795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.205 [2024-11-20 15:36:24.028802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.205 [2024-11-20 15:36:24.028808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.205 [2024-11-20 15:36:24.028822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.205 qpair failed and we were unable to recover it.
00:27:20.205 [2024-11-20 15:36:24.038777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.205 [2024-11-20 15:36:24.038831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.205 [2024-11-20 15:36:24.038844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.205 [2024-11-20 15:36:24.038850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.205 [2024-11-20 15:36:24.038856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.205 [2024-11-20 15:36:24.038871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.206 qpair failed and we were unable to recover it.
00:27:20.206 [2024-11-20 15:36:24.048782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.206 [2024-11-20 15:36:24.048839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.206 [2024-11-20 15:36:24.048853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.206 [2024-11-20 15:36:24.048860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.206 [2024-11-20 15:36:24.048866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.206 [2024-11-20 15:36:24.048880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.206 qpair failed and we were unable to recover it.
00:27:20.206 [2024-11-20 15:36:24.058818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.206 [2024-11-20 15:36:24.058871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.206 [2024-11-20 15:36:24.058885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.206 [2024-11-20 15:36:24.058891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.206 [2024-11-20 15:36:24.058898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.206 [2024-11-20 15:36:24.058912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.206 qpair failed and we were unable to recover it.
00:27:20.206 [2024-11-20 15:36:24.068847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.206 [2024-11-20 15:36:24.068900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.206 [2024-11-20 15:36:24.068913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.206 [2024-11-20 15:36:24.068920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.206 [2024-11-20 15:36:24.068926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.206 [2024-11-20 15:36:24.068940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.206 qpair failed and we were unable to recover it.
00:27:20.206 [2024-11-20 15:36:24.078872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.206 [2024-11-20 15:36:24.078927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.206 [2024-11-20 15:36:24.078940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.206 [2024-11-20 15:36:24.078950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.206 [2024-11-20 15:36:24.078957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.206 [2024-11-20 15:36:24.078971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.206 qpair failed and we were unable to recover it.
00:27:20.206 [2024-11-20 15:36:24.088861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.206 [2024-11-20 15:36:24.088909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.206 [2024-11-20 15:36:24.088926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.206 [2024-11-20 15:36:24.088932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.206 [2024-11-20 15:36:24.088938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.206 [2024-11-20 15:36:24.088956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.206 qpair failed and we were unable to recover it.
00:27:20.206 [2024-11-20 15:36:24.098941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.206 [2024-11-20 15:36:24.099003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.206 [2024-11-20 15:36:24.099015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.206 [2024-11-20 15:36:24.099022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.206 [2024-11-20 15:36:24.099028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:20.206 [2024-11-20 15:36:24.099042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.206 qpair failed and we were unable to recover it.
00:27:20.466 [2024-11-20 15:36:24.108976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.466 [2024-11-20 15:36:24.109028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.466 [2024-11-20 15:36:24.109041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.466 [2024-11-20 15:36:24.109048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.466 [2024-11-20 15:36:24.109055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.466 [2024-11-20 15:36:24.109070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.466 qpair failed and we were unable to recover it. 
00:27:20.466 [2024-11-20 15:36:24.119027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.466 [2024-11-20 15:36:24.119101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.466 [2024-11-20 15:36:24.119114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.466 [2024-11-20 15:36:24.119121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.466 [2024-11-20 15:36:24.119127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.466 [2024-11-20 15:36:24.119142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.466 qpair failed and we were unable to recover it. 
00:27:20.466 [2024-11-20 15:36:24.129026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.466 [2024-11-20 15:36:24.129086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.466 [2024-11-20 15:36:24.129099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.466 [2024-11-20 15:36:24.129106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.466 [2024-11-20 15:36:24.129115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.466 [2024-11-20 15:36:24.129130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.466 qpair failed and we were unable to recover it. 
00:27:20.466 [2024-11-20 15:36:24.139049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.466 [2024-11-20 15:36:24.139110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.466 [2024-11-20 15:36:24.139123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.466 [2024-11-20 15:36:24.139130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.466 [2024-11-20 15:36:24.139136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.466 [2024-11-20 15:36:24.139150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.466 qpair failed and we were unable to recover it. 
00:27:20.466 [2024-11-20 15:36:24.149080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.466 [2024-11-20 15:36:24.149148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.466 [2024-11-20 15:36:24.149161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.466 [2024-11-20 15:36:24.149168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.466 [2024-11-20 15:36:24.149173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.466 [2024-11-20 15:36:24.149188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.466 qpair failed and we were unable to recover it. 
00:27:20.466 [2024-11-20 15:36:24.159118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.466 [2024-11-20 15:36:24.159185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.466 [2024-11-20 15:36:24.159198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.466 [2024-11-20 15:36:24.159204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.466 [2024-11-20 15:36:24.159210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.466 [2024-11-20 15:36:24.159225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.466 qpair failed and we were unable to recover it. 
00:27:20.466 [2024-11-20 15:36:24.169132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.466 [2024-11-20 15:36:24.169187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.466 [2024-11-20 15:36:24.169200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.466 [2024-11-20 15:36:24.169207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.466 [2024-11-20 15:36:24.169213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.466 [2024-11-20 15:36:24.169228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.466 qpair failed and we were unable to recover it. 
00:27:20.466 [2024-11-20 15:36:24.179173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.467 [2024-11-20 15:36:24.179228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.467 [2024-11-20 15:36:24.179241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.467 [2024-11-20 15:36:24.179248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.467 [2024-11-20 15:36:24.179254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.467 [2024-11-20 15:36:24.179268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.467 qpair failed and we were unable to recover it. 
00:27:20.467 [2024-11-20 15:36:24.189213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.467 [2024-11-20 15:36:24.189264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.467 [2024-11-20 15:36:24.189277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.467 [2024-11-20 15:36:24.189283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.467 [2024-11-20 15:36:24.189289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.467 [2024-11-20 15:36:24.189304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.467 qpair failed and we were unable to recover it. 
00:27:20.467 [2024-11-20 15:36:24.199225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.467 [2024-11-20 15:36:24.199305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.467 [2024-11-20 15:36:24.199317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.467 [2024-11-20 15:36:24.199324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.467 [2024-11-20 15:36:24.199330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.467 [2024-11-20 15:36:24.199344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.467 qpair failed and we were unable to recover it. 
00:27:20.467 [2024-11-20 15:36:24.209267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.467 [2024-11-20 15:36:24.209324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.467 [2024-11-20 15:36:24.209337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.467 [2024-11-20 15:36:24.209344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.467 [2024-11-20 15:36:24.209350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.467 [2024-11-20 15:36:24.209364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.467 qpair failed and we were unable to recover it. 
00:27:20.467 [2024-11-20 15:36:24.219284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.467 [2024-11-20 15:36:24.219346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.467 [2024-11-20 15:36:24.219360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.467 [2024-11-20 15:36:24.219366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.467 [2024-11-20 15:36:24.219372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.467 [2024-11-20 15:36:24.219386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.467 qpair failed and we were unable to recover it. 
00:27:20.467 [2024-11-20 15:36:24.229344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.467 [2024-11-20 15:36:24.229401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.467 [2024-11-20 15:36:24.229414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.467 [2024-11-20 15:36:24.229420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.467 [2024-11-20 15:36:24.229426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.467 [2024-11-20 15:36:24.229441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.467 qpair failed and we were unable to recover it. 
00:27:20.467 [2024-11-20 15:36:24.239375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.467 [2024-11-20 15:36:24.239431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.467 [2024-11-20 15:36:24.239444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.467 [2024-11-20 15:36:24.239451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.467 [2024-11-20 15:36:24.239457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.467 [2024-11-20 15:36:24.239471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.467 qpair failed and we were unable to recover it. 
00:27:20.467 [2024-11-20 15:36:24.249318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.467 [2024-11-20 15:36:24.249371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.467 [2024-11-20 15:36:24.249384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.467 [2024-11-20 15:36:24.249391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.467 [2024-11-20 15:36:24.249397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.467 [2024-11-20 15:36:24.249412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.467 qpair failed and we were unable to recover it. 
00:27:20.467 [2024-11-20 15:36:24.259452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.467 [2024-11-20 15:36:24.259512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.467 [2024-11-20 15:36:24.259525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.467 [2024-11-20 15:36:24.259535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.467 [2024-11-20 15:36:24.259541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.467 [2024-11-20 15:36:24.259555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.467 qpair failed and we were unable to recover it. 
00:27:20.467 [2024-11-20 15:36:24.269446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.467 [2024-11-20 15:36:24.269506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.467 [2024-11-20 15:36:24.269519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.467 [2024-11-20 15:36:24.269526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.467 [2024-11-20 15:36:24.269532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.467 [2024-11-20 15:36:24.269548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.467 qpair failed and we were unable to recover it. 
00:27:20.467 [2024-11-20 15:36:24.279483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.467 [2024-11-20 15:36:24.279548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.467 [2024-11-20 15:36:24.279562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.467 [2024-11-20 15:36:24.279568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.467 [2024-11-20 15:36:24.279574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.467 [2024-11-20 15:36:24.279589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.467 qpair failed and we were unable to recover it. 
00:27:20.467 [2024-11-20 15:36:24.289536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.467 [2024-11-20 15:36:24.289589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.467 [2024-11-20 15:36:24.289603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.467 [2024-11-20 15:36:24.289610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.467 [2024-11-20 15:36:24.289617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.467 [2024-11-20 15:36:24.289632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.467 qpair failed and we were unable to recover it. 
00:27:20.467 [2024-11-20 15:36:24.299506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.467 [2024-11-20 15:36:24.299562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.467 [2024-11-20 15:36:24.299575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.467 [2024-11-20 15:36:24.299582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.467 [2024-11-20 15:36:24.299588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.467 [2024-11-20 15:36:24.299605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.467 qpair failed and we were unable to recover it. 
00:27:20.468 [2024-11-20 15:36:24.309566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.468 [2024-11-20 15:36:24.309622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.468 [2024-11-20 15:36:24.309635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.468 [2024-11-20 15:36:24.309642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.468 [2024-11-20 15:36:24.309648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.468 [2024-11-20 15:36:24.309662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.468 qpair failed and we were unable to recover it. 
00:27:20.468 [2024-11-20 15:36:24.319516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.468 [2024-11-20 15:36:24.319573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.468 [2024-11-20 15:36:24.319586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.468 [2024-11-20 15:36:24.319592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.468 [2024-11-20 15:36:24.319599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.468 [2024-11-20 15:36:24.319613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.468 qpair failed and we were unable to recover it. 
00:27:20.468 [2024-11-20 15:36:24.329602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.468 [2024-11-20 15:36:24.329654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.468 [2024-11-20 15:36:24.329666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.468 [2024-11-20 15:36:24.329673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.468 [2024-11-20 15:36:24.329679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.468 [2024-11-20 15:36:24.329693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.468 qpair failed and we were unable to recover it. 
00:27:20.468 [2024-11-20 15:36:24.339580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.468 [2024-11-20 15:36:24.339634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.468 [2024-11-20 15:36:24.339648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.468 [2024-11-20 15:36:24.339655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.468 [2024-11-20 15:36:24.339662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.468 [2024-11-20 15:36:24.339676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.468 qpair failed and we were unable to recover it. 
00:27:20.468 [2024-11-20 15:36:24.349683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.468 [2024-11-20 15:36:24.349740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.468 [2024-11-20 15:36:24.349753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.468 [2024-11-20 15:36:24.349760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.468 [2024-11-20 15:36:24.349766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.468 [2024-11-20 15:36:24.349781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.468 qpair failed and we were unable to recover it. 
00:27:20.468 [2024-11-20 15:36:24.359689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.468 [2024-11-20 15:36:24.359746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.468 [2024-11-20 15:36:24.359759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.468 [2024-11-20 15:36:24.359766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.468 [2024-11-20 15:36:24.359772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.468 [2024-11-20 15:36:24.359787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.468 qpair failed and we were unable to recover it. 
00:27:20.468 [2024-11-20 15:36:24.369719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.468 [2024-11-20 15:36:24.369780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.468 [2024-11-20 15:36:24.369793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.468 [2024-11-20 15:36:24.369800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.468 [2024-11-20 15:36:24.369806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.468 [2024-11-20 15:36:24.369821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.468 qpair failed and we were unable to recover it. 
00:27:20.728 [2024-11-20 15:36:24.379763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.728 [2024-11-20 15:36:24.379832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.728 [2024-11-20 15:36:24.379845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.728 [2024-11-20 15:36:24.379852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.728 [2024-11-20 15:36:24.379858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.728 [2024-11-20 15:36:24.379873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.728 qpair failed and we were unable to recover it. 
00:27:20.728 [2024-11-20 15:36:24.389806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.728 [2024-11-20 15:36:24.389863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.728 [2024-11-20 15:36:24.389876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.728 [2024-11-20 15:36:24.389886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.728 [2024-11-20 15:36:24.389892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.728 [2024-11-20 15:36:24.389907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.728 qpair failed and we were unable to recover it. 
00:27:20.728 [2024-11-20 15:36:24.399806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.728 [2024-11-20 15:36:24.399858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.728 [2024-11-20 15:36:24.399871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.728 [2024-11-20 15:36:24.399878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.728 [2024-11-20 15:36:24.399884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.728 [2024-11-20 15:36:24.399899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.728 qpair failed and we were unable to recover it. 
00:27:20.728 [2024-11-20 15:36:24.409828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.728 [2024-11-20 15:36:24.409884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.728 [2024-11-20 15:36:24.409898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.728 [2024-11-20 15:36:24.409905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.728 [2024-11-20 15:36:24.409910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.728 [2024-11-20 15:36:24.409926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.728 qpair failed and we were unable to recover it. 
00:27:20.728 [2024-11-20 15:36:24.419851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.728 [2024-11-20 15:36:24.419909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.728 [2024-11-20 15:36:24.419922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.728 [2024-11-20 15:36:24.419929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.728 [2024-11-20 15:36:24.419935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.728 [2024-11-20 15:36:24.419953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.728 qpair failed and we were unable to recover it. 
00:27:20.728 [2024-11-20 15:36:24.429890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.728 [2024-11-20 15:36:24.429945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.728 [2024-11-20 15:36:24.429963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.728 [2024-11-20 15:36:24.429969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.728 [2024-11-20 15:36:24.429976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.728 [2024-11-20 15:36:24.429994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.728 qpair failed and we were unable to recover it. 
00:27:20.728 [2024-11-20 15:36:24.439907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.728 [2024-11-20 15:36:24.439968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.728 [2024-11-20 15:36:24.439981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.728 [2024-11-20 15:36:24.439989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.728 [2024-11-20 15:36:24.439995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.728 [2024-11-20 15:36:24.440010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.728 qpair failed and we were unable to recover it. 
00:27:20.728 [2024-11-20 15:36:24.449918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.728 [2024-11-20 15:36:24.449980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.728 [2024-11-20 15:36:24.449994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.728 [2024-11-20 15:36:24.450000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.728 [2024-11-20 15:36:24.450006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.728 [2024-11-20 15:36:24.450021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.728 qpair failed and we were unable to recover it. 
00:27:20.728 [2024-11-20 15:36:24.459914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.728 [2024-11-20 15:36:24.459977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.728 [2024-11-20 15:36:24.459990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.729 [2024-11-20 15:36:24.459997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.729 [2024-11-20 15:36:24.460003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.729 [2024-11-20 15:36:24.460018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.729 qpair failed and we were unable to recover it. 
00:27:20.729 [2024-11-20 15:36:24.469999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.729 [2024-11-20 15:36:24.470050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.729 [2024-11-20 15:36:24.470064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.729 [2024-11-20 15:36:24.470070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.729 [2024-11-20 15:36:24.470076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.729 [2024-11-20 15:36:24.470091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.729 qpair failed and we were unable to recover it. 
00:27:20.729 [2024-11-20 15:36:24.480034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.729 [2024-11-20 15:36:24.480090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.729 [2024-11-20 15:36:24.480103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.729 [2024-11-20 15:36:24.480110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.729 [2024-11-20 15:36:24.480116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.729 [2024-11-20 15:36:24.480130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.729 qpair failed and we were unable to recover it. 
00:27:20.729 [2024-11-20 15:36:24.490078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.729 [2024-11-20 15:36:24.490128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.729 [2024-11-20 15:36:24.490141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.729 [2024-11-20 15:36:24.490147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.729 [2024-11-20 15:36:24.490153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.729 [2024-11-20 15:36:24.490168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.729 qpair failed and we were unable to recover it. 
00:27:20.729 [2024-11-20 15:36:24.500115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.729 [2024-11-20 15:36:24.500170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.729 [2024-11-20 15:36:24.500183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.729 [2024-11-20 15:36:24.500190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.729 [2024-11-20 15:36:24.500196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.729 [2024-11-20 15:36:24.500211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.729 qpair failed and we were unable to recover it. 
00:27:20.729 [2024-11-20 15:36:24.510123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.729 [2024-11-20 15:36:24.510178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.729 [2024-11-20 15:36:24.510192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.729 [2024-11-20 15:36:24.510199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.729 [2024-11-20 15:36:24.510205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.729 [2024-11-20 15:36:24.510220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.729 qpair failed and we were unable to recover it. 
00:27:20.729 [2024-11-20 15:36:24.520107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.729 [2024-11-20 15:36:24.520174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.729 [2024-11-20 15:36:24.520190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.729 [2024-11-20 15:36:24.520198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.729 [2024-11-20 15:36:24.520203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.729 [2024-11-20 15:36:24.520220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.729 qpair failed and we were unable to recover it. 
00:27:20.729 [2024-11-20 15:36:24.530169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.729 [2024-11-20 15:36:24.530225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.729 [2024-11-20 15:36:24.530239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.729 [2024-11-20 15:36:24.530245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.729 [2024-11-20 15:36:24.530252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.729 [2024-11-20 15:36:24.530266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.729 qpair failed and we were unable to recover it. 
00:27:20.729 [2024-11-20 15:36:24.540159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.729 [2024-11-20 15:36:24.540227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.729 [2024-11-20 15:36:24.540240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.729 [2024-11-20 15:36:24.540247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.729 [2024-11-20 15:36:24.540253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.729 [2024-11-20 15:36:24.540268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.729 qpair failed and we were unable to recover it. 
00:27:20.729 [2024-11-20 15:36:24.550236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.729 [2024-11-20 15:36:24.550293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.729 [2024-11-20 15:36:24.550306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.729 [2024-11-20 15:36:24.550313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.729 [2024-11-20 15:36:24.550319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.729 [2024-11-20 15:36:24.550333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.729 qpair failed and we were unable to recover it. 
00:27:20.729 [2024-11-20 15:36:24.560202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.729 [2024-11-20 15:36:24.560254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.729 [2024-11-20 15:36:24.560267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.729 [2024-11-20 15:36:24.560274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.729 [2024-11-20 15:36:24.560284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.729 [2024-11-20 15:36:24.560298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.729 qpair failed and we were unable to recover it. 
00:27:20.729 [2024-11-20 15:36:24.570267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.729 [2024-11-20 15:36:24.570324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.729 [2024-11-20 15:36:24.570337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.729 [2024-11-20 15:36:24.570344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.729 [2024-11-20 15:36:24.570350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.729 [2024-11-20 15:36:24.570365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.729 qpair failed and we were unable to recover it. 
00:27:20.729 [2024-11-20 15:36:24.580310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.729 [2024-11-20 15:36:24.580366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.729 [2024-11-20 15:36:24.580378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.729 [2024-11-20 15:36:24.580385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.729 [2024-11-20 15:36:24.580391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.729 [2024-11-20 15:36:24.580406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.729 qpair failed and we were unable to recover it. 
00:27:20.729 [2024-11-20 15:36:24.590350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.729 [2024-11-20 15:36:24.590408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.729 [2024-11-20 15:36:24.590421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.730 [2024-11-20 15:36:24.590428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.730 [2024-11-20 15:36:24.590434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.730 [2024-11-20 15:36:24.590449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.730 qpair failed and we were unable to recover it. 
00:27:20.730 [2024-11-20 15:36:24.600302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.730 [2024-11-20 15:36:24.600356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.730 [2024-11-20 15:36:24.600368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.730 [2024-11-20 15:36:24.600375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.730 [2024-11-20 15:36:24.600381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.730 [2024-11-20 15:36:24.600396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.730 qpair failed and we were unable to recover it. 
00:27:20.730 [2024-11-20 15:36:24.610389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.730 [2024-11-20 15:36:24.610443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.730 [2024-11-20 15:36:24.610456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.730 [2024-11-20 15:36:24.610463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.730 [2024-11-20 15:36:24.610469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.730 [2024-11-20 15:36:24.610484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.730 qpair failed and we were unable to recover it. 
00:27:20.730 [2024-11-20 15:36:24.620426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.730 [2024-11-20 15:36:24.620484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.730 [2024-11-20 15:36:24.620498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.730 [2024-11-20 15:36:24.620504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.730 [2024-11-20 15:36:24.620511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.730 [2024-11-20 15:36:24.620525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.730 qpair failed and we were unable to recover it. 
00:27:20.730 [2024-11-20 15:36:24.630372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.730 [2024-11-20 15:36:24.630427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.730 [2024-11-20 15:36:24.630440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.730 [2024-11-20 15:36:24.630447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.730 [2024-11-20 15:36:24.630454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.730 [2024-11-20 15:36:24.630468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.730 qpair failed and we were unable to recover it. 
00:27:20.990 [2024-11-20 15:36:24.640487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.990 [2024-11-20 15:36:24.640543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.990 [2024-11-20 15:36:24.640556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.990 [2024-11-20 15:36:24.640562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.990 [2024-11-20 15:36:24.640568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.990 [2024-11-20 15:36:24.640582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.990 qpair failed and we were unable to recover it. 
00:27:20.990 [2024-11-20 15:36:24.650498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.990 [2024-11-20 15:36:24.650554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.990 [2024-11-20 15:36:24.650570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.990 [2024-11-20 15:36:24.650577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.990 [2024-11-20 15:36:24.650584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.990 [2024-11-20 15:36:24.650598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.990 qpair failed and we were unable to recover it. 
00:27:20.990 [2024-11-20 15:36:24.660516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.990 [2024-11-20 15:36:24.660578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.990 [2024-11-20 15:36:24.660591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.990 [2024-11-20 15:36:24.660598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.990 [2024-11-20 15:36:24.660604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.990 [2024-11-20 15:36:24.660618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.990 qpair failed and we were unable to recover it. 
00:27:20.990 [2024-11-20 15:36:24.670546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.990 [2024-11-20 15:36:24.670604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.990 [2024-11-20 15:36:24.670617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.990 [2024-11-20 15:36:24.670625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.990 [2024-11-20 15:36:24.670630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.990 [2024-11-20 15:36:24.670645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.990 qpair failed and we were unable to recover it. 
00:27:20.990 [2024-11-20 15:36:24.680547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.990 [2024-11-20 15:36:24.680640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.990 [2024-11-20 15:36:24.680654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.990 [2024-11-20 15:36:24.680660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.990 [2024-11-20 15:36:24.680666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.990 [2024-11-20 15:36:24.680681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.990 qpair failed and we were unable to recover it. 
00:27:20.990 [2024-11-20 15:36:24.690615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.990 [2024-11-20 15:36:24.690672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.990 [2024-11-20 15:36:24.690685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.990 [2024-11-20 15:36:24.690692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.990 [2024-11-20 15:36:24.690703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:20.990 [2024-11-20 15:36:24.690718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:20.990 qpair failed and we were unable to recover it. 
[... the identical error sequence (Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect tqpair=0x7fdef0000b90; CQ transport error -6 on qpair id 2; qpair failed and we were unable to recover it) repeats at ~10 ms intervals from [2024-11-20 15:36:24.700658] through [2024-11-20 15:36:25.031682] ...]
00:27:21.253 [2024-11-20 15:36:25.041667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.253 [2024-11-20 15:36:25.041720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.253 [2024-11-20 15:36:25.041733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.253 [2024-11-20 15:36:25.041740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.253 [2024-11-20 15:36:25.041746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.253 [2024-11-20 15:36:25.041760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.253 qpair failed and we were unable to recover it. 
00:27:21.253 [2024-11-20 15:36:25.051658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.253 [2024-11-20 15:36:25.051708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.253 [2024-11-20 15:36:25.051721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.253 [2024-11-20 15:36:25.051727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.253 [2024-11-20 15:36:25.051733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.253 [2024-11-20 15:36:25.051748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.253 qpair failed and we were unable to recover it. 
00:27:21.253 [2024-11-20 15:36:25.061690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.253 [2024-11-20 15:36:25.061745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.253 [2024-11-20 15:36:25.061758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.253 [2024-11-20 15:36:25.061765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.254 [2024-11-20 15:36:25.061771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.254 [2024-11-20 15:36:25.061786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.254 qpair failed and we were unable to recover it. 
00:27:21.254 [2024-11-20 15:36:25.071720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.254 [2024-11-20 15:36:25.071777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.254 [2024-11-20 15:36:25.071790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.254 [2024-11-20 15:36:25.071797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.254 [2024-11-20 15:36:25.071803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.254 [2024-11-20 15:36:25.071817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.254 qpair failed and we were unable to recover it. 
00:27:21.254 [2024-11-20 15:36:25.081761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.254 [2024-11-20 15:36:25.081822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.254 [2024-11-20 15:36:25.081838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.254 [2024-11-20 15:36:25.081845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.254 [2024-11-20 15:36:25.081851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.254 [2024-11-20 15:36:25.081865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.254 qpair failed and we were unable to recover it. 
00:27:21.254 [2024-11-20 15:36:25.091774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.254 [2024-11-20 15:36:25.091832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.254 [2024-11-20 15:36:25.091846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.254 [2024-11-20 15:36:25.091852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.254 [2024-11-20 15:36:25.091858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.254 [2024-11-20 15:36:25.091872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.254 qpair failed and we were unable to recover it. 
00:27:21.254 [2024-11-20 15:36:25.101810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.254 [2024-11-20 15:36:25.101867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.254 [2024-11-20 15:36:25.101881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.254 [2024-11-20 15:36:25.101887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.254 [2024-11-20 15:36:25.101893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.254 [2024-11-20 15:36:25.101907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.254 qpair failed and we were unable to recover it. 
00:27:21.254 [2024-11-20 15:36:25.111840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.254 [2024-11-20 15:36:25.111903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.254 [2024-11-20 15:36:25.111916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.254 [2024-11-20 15:36:25.111923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.254 [2024-11-20 15:36:25.111928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.254 [2024-11-20 15:36:25.111943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.254 qpair failed and we were unable to recover it. 
00:27:21.254 [2024-11-20 15:36:25.121866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.254 [2024-11-20 15:36:25.121920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.254 [2024-11-20 15:36:25.121934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.254 [2024-11-20 15:36:25.121941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.254 [2024-11-20 15:36:25.121954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.254 [2024-11-20 15:36:25.121970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.254 qpair failed and we were unable to recover it. 
00:27:21.254 [2024-11-20 15:36:25.131887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.254 [2024-11-20 15:36:25.131939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.254 [2024-11-20 15:36:25.131956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.254 [2024-11-20 15:36:25.131963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.254 [2024-11-20 15:36:25.131969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.254 [2024-11-20 15:36:25.131984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.254 qpair failed and we were unable to recover it. 
00:27:21.254 [2024-11-20 15:36:25.141926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.254 [2024-11-20 15:36:25.141986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.254 [2024-11-20 15:36:25.142000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.254 [2024-11-20 15:36:25.142007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.254 [2024-11-20 15:36:25.142013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.254 [2024-11-20 15:36:25.142027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.254 qpair failed and we were unable to recover it. 
00:27:21.254 [2024-11-20 15:36:25.151928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.254 [2024-11-20 15:36:25.151985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.254 [2024-11-20 15:36:25.151998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.254 [2024-11-20 15:36:25.152005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.254 [2024-11-20 15:36:25.152011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.254 [2024-11-20 15:36:25.152026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.254 qpair failed and we were unable to recover it. 
00:27:21.514 [2024-11-20 15:36:25.161998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.514 [2024-11-20 15:36:25.162048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.514 [2024-11-20 15:36:25.162061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.514 [2024-11-20 15:36:25.162068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.514 [2024-11-20 15:36:25.162074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.514 [2024-11-20 15:36:25.162088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.514 qpair failed and we were unable to recover it. 
00:27:21.514 [2024-11-20 15:36:25.172019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.514 [2024-11-20 15:36:25.172076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.514 [2024-11-20 15:36:25.172090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.514 [2024-11-20 15:36:25.172097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.514 [2024-11-20 15:36:25.172103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.514 [2024-11-20 15:36:25.172118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.514 qpair failed and we were unable to recover it. 
00:27:21.514 [2024-11-20 15:36:25.182051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.514 [2024-11-20 15:36:25.182105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.514 [2024-11-20 15:36:25.182118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.514 [2024-11-20 15:36:25.182125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.514 [2024-11-20 15:36:25.182132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.514 [2024-11-20 15:36:25.182147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.514 qpair failed and we were unable to recover it. 
00:27:21.514 [2024-11-20 15:36:25.192108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.514 [2024-11-20 15:36:25.192161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.514 [2024-11-20 15:36:25.192174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.514 [2024-11-20 15:36:25.192181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.514 [2024-11-20 15:36:25.192187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.514 [2024-11-20 15:36:25.192202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.514 qpair failed and we were unable to recover it. 
00:27:21.514 [2024-11-20 15:36:25.202108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.514 [2024-11-20 15:36:25.202162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.514 [2024-11-20 15:36:25.202175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.514 [2024-11-20 15:36:25.202182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.514 [2024-11-20 15:36:25.202188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.514 [2024-11-20 15:36:25.202202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.514 qpair failed and we were unable to recover it. 
00:27:21.514 [2024-11-20 15:36:25.212142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.514 [2024-11-20 15:36:25.212197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.514 [2024-11-20 15:36:25.212213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.514 [2024-11-20 15:36:25.212220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.514 [2024-11-20 15:36:25.212226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.514 [2024-11-20 15:36:25.212240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.514 qpair failed and we were unable to recover it. 
00:27:21.514 [2024-11-20 15:36:25.222180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.514 [2024-11-20 15:36:25.222235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.514 [2024-11-20 15:36:25.222248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.514 [2024-11-20 15:36:25.222254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.514 [2024-11-20 15:36:25.222261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.514 [2024-11-20 15:36:25.222275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.514 qpair failed and we were unable to recover it. 
00:27:21.514 [2024-11-20 15:36:25.232207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.514 [2024-11-20 15:36:25.232259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.514 [2024-11-20 15:36:25.232273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.514 [2024-11-20 15:36:25.232279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.514 [2024-11-20 15:36:25.232285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.514 [2024-11-20 15:36:25.232300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.514 qpair failed and we were unable to recover it. 
00:27:21.514 [2024-11-20 15:36:25.242231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.514 [2024-11-20 15:36:25.242286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.514 [2024-11-20 15:36:25.242299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.514 [2024-11-20 15:36:25.242307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.514 [2024-11-20 15:36:25.242312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.514 [2024-11-20 15:36:25.242327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.514 qpair failed and we were unable to recover it. 
00:27:21.514 [2024-11-20 15:36:25.252262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.514 [2024-11-20 15:36:25.252318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.514 [2024-11-20 15:36:25.252332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.515 [2024-11-20 15:36:25.252341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.515 [2024-11-20 15:36:25.252347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.515 [2024-11-20 15:36:25.252362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.515 qpair failed and we were unable to recover it. 
00:27:21.515 [2024-11-20 15:36:25.262294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.515 [2024-11-20 15:36:25.262355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.515 [2024-11-20 15:36:25.262368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.515 [2024-11-20 15:36:25.262376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.515 [2024-11-20 15:36:25.262381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.515 [2024-11-20 15:36:25.262397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.515 qpair failed and we were unable to recover it. 
00:27:21.515 [2024-11-20 15:36:25.272356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.515 [2024-11-20 15:36:25.272414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.515 [2024-11-20 15:36:25.272428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.515 [2024-11-20 15:36:25.272434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.515 [2024-11-20 15:36:25.272441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.515 [2024-11-20 15:36:25.272455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.515 qpair failed and we were unable to recover it. 
00:27:21.515 [2024-11-20 15:36:25.282271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.515 [2024-11-20 15:36:25.282327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.515 [2024-11-20 15:36:25.282340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.515 [2024-11-20 15:36:25.282347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.515 [2024-11-20 15:36:25.282353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.515 [2024-11-20 15:36:25.282368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.515 qpair failed and we were unable to recover it. 
00:27:21.515 [2024-11-20 15:36:25.292358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.515 [2024-11-20 15:36:25.292411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.515 [2024-11-20 15:36:25.292425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.515 [2024-11-20 15:36:25.292431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.515 [2024-11-20 15:36:25.292437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.515 [2024-11-20 15:36:25.292452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.515 qpair failed and we were unable to recover it. 
00:27:21.515 [2024-11-20 15:36:25.302413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.515 [2024-11-20 15:36:25.302492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.515 [2024-11-20 15:36:25.302505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.515 [2024-11-20 15:36:25.302512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.515 [2024-11-20 15:36:25.302518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.515 [2024-11-20 15:36:25.302532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.515 qpair failed and we were unable to recover it. 
00:27:21.515 [2024-11-20 15:36:25.312406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.515 [2024-11-20 15:36:25.312460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.515 [2024-11-20 15:36:25.312473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.515 [2024-11-20 15:36:25.312480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.515 [2024-11-20 15:36:25.312486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.515 [2024-11-20 15:36:25.312501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-11-20 15:36:25.322409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.515 [2024-11-20 15:36:25.322461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.515 [2024-11-20 15:36:25.322474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.515 [2024-11-20 15:36:25.322481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.515 [2024-11-20 15:36:25.322487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.515 [2024-11-20 15:36:25.322501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-11-20 15:36:25.332462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.515 [2024-11-20 15:36:25.332518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.515 [2024-11-20 15:36:25.332531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.515 [2024-11-20 15:36:25.332538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.515 [2024-11-20 15:36:25.332544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.515 [2024-11-20 15:36:25.332559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-11-20 15:36:25.342531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.515 [2024-11-20 15:36:25.342603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.515 [2024-11-20 15:36:25.342618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.515 [2024-11-20 15:36:25.342624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.515 [2024-11-20 15:36:25.342631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.515 [2024-11-20 15:36:25.342645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-11-20 15:36:25.352574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.515 [2024-11-20 15:36:25.352632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.515 [2024-11-20 15:36:25.352645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.515 [2024-11-20 15:36:25.352652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.515 [2024-11-20 15:36:25.352659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.515 [2024-11-20 15:36:25.352674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-11-20 15:36:25.362558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.515 [2024-11-20 15:36:25.362616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.515 [2024-11-20 15:36:25.362630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.515 [2024-11-20 15:36:25.362636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.515 [2024-11-20 15:36:25.362642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.515 [2024-11-20 15:36:25.362657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-11-20 15:36:25.372595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.515 [2024-11-20 15:36:25.372649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.515 [2024-11-20 15:36:25.372662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.515 [2024-11-20 15:36:25.372669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.515 [2024-11-20 15:36:25.372675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.515 [2024-11-20 15:36:25.372690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-11-20 15:36:25.382616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.515 [2024-11-20 15:36:25.382670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.515 [2024-11-20 15:36:25.382683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.515 [2024-11-20 15:36:25.382693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.515 [2024-11-20 15:36:25.382699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.515 [2024-11-20 15:36:25.382714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-11-20 15:36:25.392644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.515 [2024-11-20 15:36:25.392697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.515 [2024-11-20 15:36:25.392710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.515 [2024-11-20 15:36:25.392717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.515 [2024-11-20 15:36:25.392723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.515 [2024-11-20 15:36:25.392737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-11-20 15:36:25.402713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.515 [2024-11-20 15:36:25.402768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.515 [2024-11-20 15:36:25.402781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.515 [2024-11-20 15:36:25.402788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.515 [2024-11-20 15:36:25.402794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.515 [2024-11-20 15:36:25.402808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.515 [2024-11-20 15:36:25.412698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.515 [2024-11-20 15:36:25.412787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.515 [2024-11-20 15:36:25.412800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.515 [2024-11-20 15:36:25.412807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.515 [2024-11-20 15:36:25.412812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.515 [2024-11-20 15:36:25.412827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.515 qpair failed and we were unable to recover it.
00:27:21.775 [2024-11-20 15:36:25.422773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.775 [2024-11-20 15:36:25.422828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.775 [2024-11-20 15:36:25.422842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.775 [2024-11-20 15:36:25.422849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.775 [2024-11-20 15:36:25.422855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.775 [2024-11-20 15:36:25.422872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.775 qpair failed and we were unable to recover it.
00:27:21.775 [2024-11-20 15:36:25.432750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.775 [2024-11-20 15:36:25.432801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.776 [2024-11-20 15:36:25.432814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.776 [2024-11-20 15:36:25.432821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.776 [2024-11-20 15:36:25.432827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.776 [2024-11-20 15:36:25.432841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.776 qpair failed and we were unable to recover it.
00:27:21.776 [2024-11-20 15:36:25.442842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.776 [2024-11-20 15:36:25.442944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.776 [2024-11-20 15:36:25.442962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.776 [2024-11-20 15:36:25.442969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.776 [2024-11-20 15:36:25.442974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.776 [2024-11-20 15:36:25.442989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.776 qpair failed and we were unable to recover it.
00:27:21.776 [2024-11-20 15:36:25.452816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.776 [2024-11-20 15:36:25.452872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.776 [2024-11-20 15:36:25.452885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.776 [2024-11-20 15:36:25.452892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.776 [2024-11-20 15:36:25.452898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.776 [2024-11-20 15:36:25.452913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.776 qpair failed and we were unable to recover it.
00:27:21.776 [2024-11-20 15:36:25.462871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.776 [2024-11-20 15:36:25.462964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.776 [2024-11-20 15:36:25.462978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.776 [2024-11-20 15:36:25.462984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.776 [2024-11-20 15:36:25.462990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.776 [2024-11-20 15:36:25.463006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.776 qpair failed and we were unable to recover it.
00:27:21.776 [2024-11-20 15:36:25.473007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.776 [2024-11-20 15:36:25.473073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.776 [2024-11-20 15:36:25.473086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.776 [2024-11-20 15:36:25.473093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.776 [2024-11-20 15:36:25.473099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.776 [2024-11-20 15:36:25.473114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.776 qpair failed and we were unable to recover it.
00:27:21.776 [2024-11-20 15:36:25.482929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.776 [2024-11-20 15:36:25.482986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.776 [2024-11-20 15:36:25.482999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.776 [2024-11-20 15:36:25.483006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.776 [2024-11-20 15:36:25.483012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.776 [2024-11-20 15:36:25.483026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.776 qpair failed and we were unable to recover it.
00:27:21.776 [2024-11-20 15:36:25.492969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.776 [2024-11-20 15:36:25.493023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.776 [2024-11-20 15:36:25.493037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.776 [2024-11-20 15:36:25.493043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.776 [2024-11-20 15:36:25.493050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.776 [2024-11-20 15:36:25.493065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.776 qpair failed and we were unable to recover it.
00:27:21.776 [2024-11-20 15:36:25.503016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.776 [2024-11-20 15:36:25.503075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.776 [2024-11-20 15:36:25.503089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.776 [2024-11-20 15:36:25.503096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.776 [2024-11-20 15:36:25.503102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.776 [2024-11-20 15:36:25.503116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.776 qpair failed and we were unable to recover it.
00:27:21.776 [2024-11-20 15:36:25.513226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.776 [2024-11-20 15:36:25.513280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.776 [2024-11-20 15:36:25.513297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.776 [2024-11-20 15:36:25.513304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.776 [2024-11-20 15:36:25.513310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.776 [2024-11-20 15:36:25.513324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.776 qpair failed and we were unable to recover it.
00:27:21.776 [2024-11-20 15:36:25.522968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.776 [2024-11-20 15:36:25.523024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.776 [2024-11-20 15:36:25.523038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.776 [2024-11-20 15:36:25.523045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.776 [2024-11-20 15:36:25.523051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.776 [2024-11-20 15:36:25.523066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.776 qpair failed and we were unable to recover it.
00:27:21.776 [2024-11-20 15:36:25.533037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.776 [2024-11-20 15:36:25.533093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.776 [2024-11-20 15:36:25.533106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.776 [2024-11-20 15:36:25.533112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.776 [2024-11-20 15:36:25.533119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.776 [2024-11-20 15:36:25.533133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.776 qpair failed and we were unable to recover it.
00:27:21.776 [2024-11-20 15:36:25.543002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.776 [2024-11-20 15:36:25.543056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.776 [2024-11-20 15:36:25.543069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.776 [2024-11-20 15:36:25.543076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.776 [2024-11-20 15:36:25.543082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.776 [2024-11-20 15:36:25.543097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.776 qpair failed and we were unable to recover it.
00:27:21.776 [2024-11-20 15:36:25.553123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.776 [2024-11-20 15:36:25.553176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.776 [2024-11-20 15:36:25.553189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.777 [2024-11-20 15:36:25.553196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.777 [2024-11-20 15:36:25.553202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.777 [2024-11-20 15:36:25.553221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.777 qpair failed and we were unable to recover it.
00:27:21.777 [2024-11-20 15:36:25.563111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.777 [2024-11-20 15:36:25.563168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.777 [2024-11-20 15:36:25.563182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.777 [2024-11-20 15:36:25.563189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.777 [2024-11-20 15:36:25.563195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.777 [2024-11-20 15:36:25.563209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.777 qpair failed and we were unable to recover it.
00:27:21.777 [2024-11-20 15:36:25.573141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.777 [2024-11-20 15:36:25.573200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.777 [2024-11-20 15:36:25.573213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.777 [2024-11-20 15:36:25.573220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.777 [2024-11-20 15:36:25.573225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.777 [2024-11-20 15:36:25.573240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.777 qpair failed and we were unable to recover it.
00:27:21.777 [2024-11-20 15:36:25.583130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.777 [2024-11-20 15:36:25.583189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.777 [2024-11-20 15:36:25.583202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.777 [2024-11-20 15:36:25.583209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.777 [2024-11-20 15:36:25.583215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.777 [2024-11-20 15:36:25.583230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.777 qpair failed and we were unable to recover it.
00:27:21.777 [2024-11-20 15:36:25.593130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.777 [2024-11-20 15:36:25.593190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.777 [2024-11-20 15:36:25.593203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.777 [2024-11-20 15:36:25.593210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.777 [2024-11-20 15:36:25.593216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.777 [2024-11-20 15:36:25.593231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.777 qpair failed and we were unable to recover it.
00:27:21.777 [2024-11-20 15:36:25.603228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.777 [2024-11-20 15:36:25.603282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.777 [2024-11-20 15:36:25.603296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.777 [2024-11-20 15:36:25.603302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.777 [2024-11-20 15:36:25.603308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.777 [2024-11-20 15:36:25.603322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.777 qpair failed and we were unable to recover it.
00:27:21.777 [2024-11-20 15:36:25.613179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.777 [2024-11-20 15:36:25.613232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.777 [2024-11-20 15:36:25.613245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.777 [2024-11-20 15:36:25.613252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.777 [2024-11-20 15:36:25.613258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.777 [2024-11-20 15:36:25.613272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.777 qpair failed and we were unable to recover it.
00:27:21.777 [2024-11-20 15:36:25.623273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.777 [2024-11-20 15:36:25.623341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.777 [2024-11-20 15:36:25.623355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.777 [2024-11-20 15:36:25.623362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.777 [2024-11-20 15:36:25.623367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.777 [2024-11-20 15:36:25.623381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.777 qpair failed and we were unable to recover it.
00:27:21.777 [2024-11-20 15:36:25.633303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.777 [2024-11-20 15:36:25.633359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.777 [2024-11-20 15:36:25.633373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.777 [2024-11-20 15:36:25.633379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.777 [2024-11-20 15:36:25.633385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.777 [2024-11-20 15:36:25.633400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.777 qpair failed and we were unable to recover it.
00:27:21.777 [2024-11-20 15:36:25.643333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.777 [2024-11-20 15:36:25.643384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.777 [2024-11-20 15:36:25.643402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.777 [2024-11-20 15:36:25.643410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.777 [2024-11-20 15:36:25.643418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.777 [2024-11-20 15:36:25.643434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.777 qpair failed and we were unable to recover it.
00:27:21.777 [2024-11-20 15:36:25.653360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.777 [2024-11-20 15:36:25.653412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.777 [2024-11-20 15:36:25.653425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.777 [2024-11-20 15:36:25.653432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.777 [2024-11-20 15:36:25.653437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:21.777 [2024-11-20 15:36:25.653452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.777 qpair failed and we were unable to recover it.
00:27:21.777 [2024-11-20 15:36:25.663372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.777 [2024-11-20 15:36:25.663429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.777 [2024-11-20 15:36:25.663443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.777 [2024-11-20 15:36:25.663449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.777 [2024-11-20 15:36:25.663455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.777 [2024-11-20 15:36:25.663470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.777 qpair failed and we were unable to recover it. 
00:27:21.777 [2024-11-20 15:36:25.673347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.777 [2024-11-20 15:36:25.673406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.777 [2024-11-20 15:36:25.673419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.777 [2024-11-20 15:36:25.673425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.777 [2024-11-20 15:36:25.673431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:21.777 [2024-11-20 15:36:25.673446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.777 qpair failed and we were unable to recover it. 
00:27:22.038 [2024-11-20 15:36:25.683398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.038 [2024-11-20 15:36:25.683454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.038 [2024-11-20 15:36:25.683467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.038 [2024-11-20 15:36:25.683474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.038 [2024-11-20 15:36:25.683483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.038 [2024-11-20 15:36:25.683498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.038 qpair failed and we were unable to recover it.
00:27:22.038 [2024-11-20 15:36:25.693419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.038 [2024-11-20 15:36:25.693476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.038 [2024-11-20 15:36:25.693489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.038 [2024-11-20 15:36:25.693496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.038 [2024-11-20 15:36:25.693502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.038 [2024-11-20 15:36:25.693517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.038 qpair failed and we were unable to recover it.
00:27:22.038 [2024-11-20 15:36:25.703491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.038 [2024-11-20 15:36:25.703550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.038 [2024-11-20 15:36:25.703563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.038 [2024-11-20 15:36:25.703569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.038 [2024-11-20 15:36:25.703575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.038 [2024-11-20 15:36:25.703590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.038 qpair failed and we were unable to recover it.
00:27:22.038 [2024-11-20 15:36:25.713525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.038 [2024-11-20 15:36:25.713583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.038 [2024-11-20 15:36:25.713605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.038 [2024-11-20 15:36:25.713612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.038 [2024-11-20 15:36:25.713618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.038 [2024-11-20 15:36:25.713639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.038 qpair failed and we were unable to recover it.
00:27:22.038 [2024-11-20 15:36:25.723574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.038 [2024-11-20 15:36:25.723645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.038 [2024-11-20 15:36:25.723659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.038 [2024-11-20 15:36:25.723666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.038 [2024-11-20 15:36:25.723672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.038 [2024-11-20 15:36:25.723688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.038 qpair failed and we were unable to recover it.
00:27:22.038 [2024-11-20 15:36:25.733562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.038 [2024-11-20 15:36:25.733619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.038 [2024-11-20 15:36:25.733632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.038 [2024-11-20 15:36:25.733639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.038 [2024-11-20 15:36:25.733645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.038 [2024-11-20 15:36:25.733660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.038 qpair failed and we were unable to recover it.
00:27:22.038 [2024-11-20 15:36:25.743629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.038 [2024-11-20 15:36:25.743687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.038 [2024-11-20 15:36:25.743701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.038 [2024-11-20 15:36:25.743707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.038 [2024-11-20 15:36:25.743713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.038 [2024-11-20 15:36:25.743728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.038 qpair failed and we were unable to recover it.
00:27:22.038 [2024-11-20 15:36:25.753664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.038 [2024-11-20 15:36:25.753725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.038 [2024-11-20 15:36:25.753738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.038 [2024-11-20 15:36:25.753745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.039 [2024-11-20 15:36:25.753751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.039 [2024-11-20 15:36:25.753766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.039 qpair failed and we were unable to recover it.
00:27:22.039 [2024-11-20 15:36:25.763678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.039 [2024-11-20 15:36:25.763734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.039 [2024-11-20 15:36:25.763747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.039 [2024-11-20 15:36:25.763754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.039 [2024-11-20 15:36:25.763760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.039 [2024-11-20 15:36:25.763774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.039 qpair failed and we were unable to recover it.
00:27:22.039 [2024-11-20 15:36:25.773637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.039 [2024-11-20 15:36:25.773693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.039 [2024-11-20 15:36:25.773710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.039 [2024-11-20 15:36:25.773716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.039 [2024-11-20 15:36:25.773722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.039 [2024-11-20 15:36:25.773737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.039 qpair failed and we were unable to recover it.
00:27:22.039 [2024-11-20 15:36:25.783682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.039 [2024-11-20 15:36:25.783757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.039 [2024-11-20 15:36:25.783770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.039 [2024-11-20 15:36:25.783776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.039 [2024-11-20 15:36:25.783782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.039 [2024-11-20 15:36:25.783798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.039 qpair failed and we were unable to recover it.
00:27:22.039 [2024-11-20 15:36:25.793824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.039 [2024-11-20 15:36:25.793928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.039 [2024-11-20 15:36:25.793941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.039 [2024-11-20 15:36:25.793952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.039 [2024-11-20 15:36:25.793959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.039 [2024-11-20 15:36:25.793974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.039 qpair failed and we were unable to recover it.
00:27:22.039 [2024-11-20 15:36:25.803799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.039 [2024-11-20 15:36:25.803846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.039 [2024-11-20 15:36:25.803859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.039 [2024-11-20 15:36:25.803866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.039 [2024-11-20 15:36:25.803872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.039 [2024-11-20 15:36:25.803887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.039 qpair failed and we were unable to recover it.
00:27:22.039 [2024-11-20 15:36:25.813724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.039 [2024-11-20 15:36:25.813791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.039 [2024-11-20 15:36:25.813804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.039 [2024-11-20 15:36:25.813814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.039 [2024-11-20 15:36:25.813820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.039 [2024-11-20 15:36:25.813835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.039 qpair failed and we were unable to recover it.
00:27:22.039 [2024-11-20 15:36:25.823778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.039 [2024-11-20 15:36:25.823886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.039 [2024-11-20 15:36:25.823899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.039 [2024-11-20 15:36:25.823906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.039 [2024-11-20 15:36:25.823912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.039 [2024-11-20 15:36:25.823927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.039 qpair failed and we were unable to recover it.
00:27:22.039 [2024-11-20 15:36:25.833869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.039 [2024-11-20 15:36:25.833921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.039 [2024-11-20 15:36:25.833935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.039 [2024-11-20 15:36:25.833941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.039 [2024-11-20 15:36:25.833952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.039 [2024-11-20 15:36:25.833967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.039 qpair failed and we were unable to recover it.
00:27:22.039 [2024-11-20 15:36:25.843882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.039 [2024-11-20 15:36:25.843962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.039 [2024-11-20 15:36:25.843975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.039 [2024-11-20 15:36:25.843982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.039 [2024-11-20 15:36:25.843988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.039 [2024-11-20 15:36:25.844002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.039 qpair failed and we were unable to recover it.
00:27:22.039 [2024-11-20 15:36:25.853894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.039 [2024-11-20 15:36:25.853944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.039 [2024-11-20 15:36:25.853962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.039 [2024-11-20 15:36:25.853968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.039 [2024-11-20 15:36:25.853975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.039 [2024-11-20 15:36:25.853990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.039 qpair failed and we were unable to recover it.
00:27:22.039 [2024-11-20 15:36:25.863984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.039 [2024-11-20 15:36:25.864038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.039 [2024-11-20 15:36:25.864052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.039 [2024-11-20 15:36:25.864058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.039 [2024-11-20 15:36:25.864064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.039 [2024-11-20 15:36:25.864079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.039 qpair failed and we were unable to recover it.
00:27:22.039 [2024-11-20 15:36:25.873931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.039 [2024-11-20 15:36:25.874024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.039 [2024-11-20 15:36:25.874038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.039 [2024-11-20 15:36:25.874044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.039 [2024-11-20 15:36:25.874050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.039 [2024-11-20 15:36:25.874065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.039 qpair failed and we were unable to recover it.
00:27:22.039 [2024-11-20 15:36:25.883999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.039 [2024-11-20 15:36:25.884060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.039 [2024-11-20 15:36:25.884073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.039 [2024-11-20 15:36:25.884079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.040 [2024-11-20 15:36:25.884085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.040 [2024-11-20 15:36:25.884100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.040 qpair failed and we were unable to recover it.
00:27:22.040 [2024-11-20 15:36:25.894004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.040 [2024-11-20 15:36:25.894060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.040 [2024-11-20 15:36:25.894073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.040 [2024-11-20 15:36:25.894079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.040 [2024-11-20 15:36:25.894086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.040 [2024-11-20 15:36:25.894100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.040 qpair failed and we were unable to recover it.
00:27:22.040 [2024-11-20 15:36:25.904049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.040 [2024-11-20 15:36:25.904110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.040 [2024-11-20 15:36:25.904123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.040 [2024-11-20 15:36:25.904129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.040 [2024-11-20 15:36:25.904136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.040 [2024-11-20 15:36:25.904151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.040 qpair failed and we were unable to recover it.
00:27:22.040 [2024-11-20 15:36:25.914024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.040 [2024-11-20 15:36:25.914076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.040 [2024-11-20 15:36:25.914091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.040 [2024-11-20 15:36:25.914097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.040 [2024-11-20 15:36:25.914103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.040 [2024-11-20 15:36:25.914118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.040 qpair failed and we were unable to recover it.
00:27:22.040 [2024-11-20 15:36:25.924140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.040 [2024-11-20 15:36:25.924194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.040 [2024-11-20 15:36:25.924207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.040 [2024-11-20 15:36:25.924214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.040 [2024-11-20 15:36:25.924220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.040 [2024-11-20 15:36:25.924234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.040 qpair failed and we were unable to recover it.
00:27:22.040 [2024-11-20 15:36:25.934072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.040 [2024-11-20 15:36:25.934126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.040 [2024-11-20 15:36:25.934139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.040 [2024-11-20 15:36:25.934146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.040 [2024-11-20 15:36:25.934151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.040 [2024-11-20 15:36:25.934166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.040 qpair failed and we were unable to recover it.
00:27:22.300 [2024-11-20 15:36:25.944159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.300 [2024-11-20 15:36:25.944213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.300 [2024-11-20 15:36:25.944226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.300 [2024-11-20 15:36:25.944236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.300 [2024-11-20 15:36:25.944242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.300 [2024-11-20 15:36:25.944256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.300 qpair failed and we were unable to recover it.
00:27:22.300 [2024-11-20 15:36:25.954238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.300 [2024-11-20 15:36:25.954295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.300 [2024-11-20 15:36:25.954309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.300 [2024-11-20 15:36:25.954315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.300 [2024-11-20 15:36:25.954322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.300 [2024-11-20 15:36:25.954337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.300 qpair failed and we were unable to recover it.
00:27:22.300 [2024-11-20 15:36:25.964251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.300 [2024-11-20 15:36:25.964307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.300 [2024-11-20 15:36:25.964320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.300 [2024-11-20 15:36:25.964327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.300 [2024-11-20 15:36:25.964333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:22.300 [2024-11-20 15:36:25.964348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.300 qpair failed and we were unable to recover it.
00:27:22.300 [2024-11-20 15:36:25.974276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.300 [2024-11-20 15:36:25.974357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.300 [2024-11-20 15:36:25.974370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.300 [2024-11-20 15:36:25.974377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.300 [2024-11-20 15:36:25.974383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.300 [2024-11-20 15:36:25.974398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.300 qpair failed and we were unable to recover it. 
00:27:22.300 [2024-11-20 15:36:25.984325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.300 [2024-11-20 15:36:25.984378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.300 [2024-11-20 15:36:25.984391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.300 [2024-11-20 15:36:25.984398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.300 [2024-11-20 15:36:25.984404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.300 [2024-11-20 15:36:25.984422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.300 qpair failed and we were unable to recover it. 
00:27:22.300 [2024-11-20 15:36:25.994337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.300 [2024-11-20 15:36:25.994448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.300 [2024-11-20 15:36:25.994461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.300 [2024-11-20 15:36:25.994468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.300 [2024-11-20 15:36:25.994474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.300 [2024-11-20 15:36:25.994489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.300 qpair failed and we were unable to recover it. 
00:27:22.300 [2024-11-20 15:36:26.004277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.300 [2024-11-20 15:36:26.004338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.300 [2024-11-20 15:36:26.004351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.300 [2024-11-20 15:36:26.004358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.300 [2024-11-20 15:36:26.004364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.300 [2024-11-20 15:36:26.004379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.300 qpair failed and we were unable to recover it. 
00:27:22.300 [2024-11-20 15:36:26.014405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.300 [2024-11-20 15:36:26.014459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.300 [2024-11-20 15:36:26.014472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.300 [2024-11-20 15:36:26.014478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.300 [2024-11-20 15:36:26.014484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.300 [2024-11-20 15:36:26.014500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.300 qpair failed and we were unable to recover it. 
00:27:22.300 [2024-11-20 15:36:26.024374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.300 [2024-11-20 15:36:26.024446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.300 [2024-11-20 15:36:26.024460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.300 [2024-11-20 15:36:26.024467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.300 [2024-11-20 15:36:26.024472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.301 [2024-11-20 15:36:26.024487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.301 qpair failed and we were unable to recover it. 
00:27:22.301 [2024-11-20 15:36:26.034384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.301 [2024-11-20 15:36:26.034439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.301 [2024-11-20 15:36:26.034453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.301 [2024-11-20 15:36:26.034459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.301 [2024-11-20 15:36:26.034466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.301 [2024-11-20 15:36:26.034480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.301 qpair failed and we were unable to recover it. 
00:27:22.301 [2024-11-20 15:36:26.044450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.301 [2024-11-20 15:36:26.044502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.301 [2024-11-20 15:36:26.044515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.301 [2024-11-20 15:36:26.044522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.301 [2024-11-20 15:36:26.044528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.301 [2024-11-20 15:36:26.044542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.301 qpair failed and we were unable to recover it. 
00:27:22.301 [2024-11-20 15:36:26.054484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.301 [2024-11-20 15:36:26.054536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.301 [2024-11-20 15:36:26.054549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.301 [2024-11-20 15:36:26.054555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.301 [2024-11-20 15:36:26.054562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.301 [2024-11-20 15:36:26.054577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.301 qpair failed and we were unable to recover it. 
00:27:22.301 [2024-11-20 15:36:26.064450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.301 [2024-11-20 15:36:26.064504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.301 [2024-11-20 15:36:26.064518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.301 [2024-11-20 15:36:26.064524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.301 [2024-11-20 15:36:26.064530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.301 [2024-11-20 15:36:26.064545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.301 qpair failed and we were unable to recover it. 
00:27:22.301 [2024-11-20 15:36:26.074471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.301 [2024-11-20 15:36:26.074528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.301 [2024-11-20 15:36:26.074544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.301 [2024-11-20 15:36:26.074550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.301 [2024-11-20 15:36:26.074556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.301 [2024-11-20 15:36:26.074571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.301 qpair failed and we were unable to recover it. 
00:27:22.301 [2024-11-20 15:36:26.084596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.301 [2024-11-20 15:36:26.084649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.301 [2024-11-20 15:36:26.084662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.301 [2024-11-20 15:36:26.084668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.301 [2024-11-20 15:36:26.084675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.301 [2024-11-20 15:36:26.084689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.301 qpair failed and we were unable to recover it. 
00:27:22.301 [2024-11-20 15:36:26.094605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.301 [2024-11-20 15:36:26.094677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.301 [2024-11-20 15:36:26.094690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.301 [2024-11-20 15:36:26.094697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.301 [2024-11-20 15:36:26.094703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.301 [2024-11-20 15:36:26.094717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.301 qpair failed and we were unable to recover it. 
00:27:22.301 [2024-11-20 15:36:26.104632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.301 [2024-11-20 15:36:26.104685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.301 [2024-11-20 15:36:26.104698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.301 [2024-11-20 15:36:26.104705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.301 [2024-11-20 15:36:26.104710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.301 [2024-11-20 15:36:26.104725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.301 qpair failed and we were unable to recover it. 
00:27:22.301 [2024-11-20 15:36:26.114657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.301 [2024-11-20 15:36:26.114715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.301 [2024-11-20 15:36:26.114728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.301 [2024-11-20 15:36:26.114734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.301 [2024-11-20 15:36:26.114744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.301 [2024-11-20 15:36:26.114759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.301 qpair failed and we were unable to recover it. 
00:27:22.301 [2024-11-20 15:36:26.124675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.301 [2024-11-20 15:36:26.124732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.301 [2024-11-20 15:36:26.124746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.301 [2024-11-20 15:36:26.124753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.301 [2024-11-20 15:36:26.124759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.301 [2024-11-20 15:36:26.124774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.301 qpair failed and we were unable to recover it. 
00:27:22.301 [2024-11-20 15:36:26.134686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.301 [2024-11-20 15:36:26.134736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.301 [2024-11-20 15:36:26.134750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.301 [2024-11-20 15:36:26.134757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.301 [2024-11-20 15:36:26.134763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.301 [2024-11-20 15:36:26.134777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.301 qpair failed and we were unable to recover it. 
00:27:22.301 [2024-11-20 15:36:26.144697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.301 [2024-11-20 15:36:26.144750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.301 [2024-11-20 15:36:26.144763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.301 [2024-11-20 15:36:26.144770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.301 [2024-11-20 15:36:26.144776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.301 [2024-11-20 15:36:26.144790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.301 qpair failed and we were unable to recover it. 
00:27:22.301 [2024-11-20 15:36:26.154770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.301 [2024-11-20 15:36:26.154824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.301 [2024-11-20 15:36:26.154837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.301 [2024-11-20 15:36:26.154843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.301 [2024-11-20 15:36:26.154849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.302 [2024-11-20 15:36:26.154863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.302 qpair failed and we were unable to recover it. 
00:27:22.302 [2024-11-20 15:36:26.164809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.302 [2024-11-20 15:36:26.164866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.302 [2024-11-20 15:36:26.164879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.302 [2024-11-20 15:36:26.164886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.302 [2024-11-20 15:36:26.164893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.302 [2024-11-20 15:36:26.164907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.302 qpair failed and we were unable to recover it. 
00:27:22.302 [2024-11-20 15:36:26.174833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.302 [2024-11-20 15:36:26.174887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.302 [2024-11-20 15:36:26.174900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.302 [2024-11-20 15:36:26.174907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.302 [2024-11-20 15:36:26.174913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.302 [2024-11-20 15:36:26.174928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.302 qpair failed and we were unable to recover it. 
00:27:22.302 [2024-11-20 15:36:26.184896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.302 [2024-11-20 15:36:26.184956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.302 [2024-11-20 15:36:26.184970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.302 [2024-11-20 15:36:26.184976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.302 [2024-11-20 15:36:26.184982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.302 [2024-11-20 15:36:26.184998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.302 qpair failed and we were unable to recover it. 
00:27:22.302 [2024-11-20 15:36:26.194899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.302 [2024-11-20 15:36:26.194965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.302 [2024-11-20 15:36:26.194979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.302 [2024-11-20 15:36:26.194986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.302 [2024-11-20 15:36:26.194992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.302 [2024-11-20 15:36:26.195006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.302 qpair failed and we were unable to recover it. 
00:27:22.562 [2024-11-20 15:36:26.204929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.562 [2024-11-20 15:36:26.204990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.562 [2024-11-20 15:36:26.205009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.562 [2024-11-20 15:36:26.205016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.562 [2024-11-20 15:36:26.205021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.562 [2024-11-20 15:36:26.205036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.562 qpair failed and we were unable to recover it. 
00:27:22.562 [2024-11-20 15:36:26.214882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.562 [2024-11-20 15:36:26.214933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.562 [2024-11-20 15:36:26.214951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.562 [2024-11-20 15:36:26.214958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.562 [2024-11-20 15:36:26.214964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.562 [2024-11-20 15:36:26.214979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.562 qpair failed and we were unable to recover it. 
00:27:22.562 [2024-11-20 15:36:26.224987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.562 [2024-11-20 15:36:26.225041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.562 [2024-11-20 15:36:26.225055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.562 [2024-11-20 15:36:26.225062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.562 [2024-11-20 15:36:26.225067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.562 [2024-11-20 15:36:26.225082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.562 qpair failed and we were unable to recover it. 
00:27:22.562 [2024-11-20 15:36:26.235044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.562 [2024-11-20 15:36:26.235110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.562 [2024-11-20 15:36:26.235124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.562 [2024-11-20 15:36:26.235130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.562 [2024-11-20 15:36:26.235136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.562 [2024-11-20 15:36:26.235151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.562 qpair failed and we were unable to recover it. 
00:27:22.562 [2024-11-20 15:36:26.245022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.562 [2024-11-20 15:36:26.245076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.562 [2024-11-20 15:36:26.245089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.562 [2024-11-20 15:36:26.245096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.562 [2024-11-20 15:36:26.245105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.562 [2024-11-20 15:36:26.245120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.562 qpair failed and we were unable to recover it. 
00:27:22.562 [2024-11-20 15:36:26.255056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.562 [2024-11-20 15:36:26.255108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.562 [2024-11-20 15:36:26.255121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.562 [2024-11-20 15:36:26.255127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.562 [2024-11-20 15:36:26.255133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.562 [2024-11-20 15:36:26.255148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.562 qpair failed and we were unable to recover it. 
00:27:22.562 [2024-11-20 15:36:26.265121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.562 [2024-11-20 15:36:26.265180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.562 [2024-11-20 15:36:26.265192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.562 [2024-11-20 15:36:26.265199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.562 [2024-11-20 15:36:26.265205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.562 [2024-11-20 15:36:26.265220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.562 qpair failed and we were unable to recover it. 
00:27:22.562 [2024-11-20 15:36:26.275091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.562 [2024-11-20 15:36:26.275151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.562 [2024-11-20 15:36:26.275163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.562 [2024-11-20 15:36:26.275170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.562 [2024-11-20 15:36:26.275176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.562 [2024-11-20 15:36:26.275191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.562 qpair failed and we were unable to recover it. 
00:27:22.562 [2024-11-20 15:36:26.285192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.562 [2024-11-20 15:36:26.285294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.562 [2024-11-20 15:36:26.285307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.562 [2024-11-20 15:36:26.285313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.562 [2024-11-20 15:36:26.285320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.563 [2024-11-20 15:36:26.285334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.563 qpair failed and we were unable to recover it. 
00:27:22.563 [2024-11-20 15:36:26.295169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.563 [2024-11-20 15:36:26.295221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.563 [2024-11-20 15:36:26.295234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.563 [2024-11-20 15:36:26.295241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.563 [2024-11-20 15:36:26.295247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.563 [2024-11-20 15:36:26.295261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.563 qpair failed and we were unable to recover it. 
00:27:22.563 [2024-11-20 15:36:26.305206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.563 [2024-11-20 15:36:26.305259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.563 [2024-11-20 15:36:26.305271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.563 [2024-11-20 15:36:26.305278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.563 [2024-11-20 15:36:26.305284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.563 [2024-11-20 15:36:26.305298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.563 qpair failed and we were unable to recover it. 
00:27:22.563 [2024-11-20 15:36:26.315236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.563 [2024-11-20 15:36:26.315291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.563 [2024-11-20 15:36:26.315304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.563 [2024-11-20 15:36:26.315310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.563 [2024-11-20 15:36:26.315316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.563 [2024-11-20 15:36:26.315330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.563 qpair failed and we were unable to recover it. 
00:27:22.563 [2024-11-20 15:36:26.325260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.563 [2024-11-20 15:36:26.325314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.563 [2024-11-20 15:36:26.325328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.563 [2024-11-20 15:36:26.325334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.563 [2024-11-20 15:36:26.325340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.563 [2024-11-20 15:36:26.325355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.563 qpair failed and we were unable to recover it. 
00:27:22.563 [2024-11-20 15:36:26.335284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.563 [2024-11-20 15:36:26.335336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.563 [2024-11-20 15:36:26.335352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.563 [2024-11-20 15:36:26.335358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.563 [2024-11-20 15:36:26.335364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.563 [2024-11-20 15:36:26.335379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.563 qpair failed and we were unable to recover it. 
00:27:22.563 [2024-11-20 15:36:26.345297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.563 [2024-11-20 15:36:26.345359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.563 [2024-11-20 15:36:26.345374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.563 [2024-11-20 15:36:26.345381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.563 [2024-11-20 15:36:26.345387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.563 [2024-11-20 15:36:26.345401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.563 qpair failed and we were unable to recover it. 
00:27:22.563 [2024-11-20 15:36:26.355383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.563 [2024-11-20 15:36:26.355431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.563 [2024-11-20 15:36:26.355444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.563 [2024-11-20 15:36:26.355450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.563 [2024-11-20 15:36:26.355456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.563 [2024-11-20 15:36:26.355471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.563 qpair failed and we were unable to recover it. 
00:27:22.563 [2024-11-20 15:36:26.365374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.563 [2024-11-20 15:36:26.365431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.563 [2024-11-20 15:36:26.365444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.563 [2024-11-20 15:36:26.365451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.563 [2024-11-20 15:36:26.365457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.563 [2024-11-20 15:36:26.365471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.563 qpair failed and we were unable to recover it. 
00:27:22.563 [2024-11-20 15:36:26.375420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.563 [2024-11-20 15:36:26.375481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.563 [2024-11-20 15:36:26.375494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.563 [2024-11-20 15:36:26.375503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.563 [2024-11-20 15:36:26.375509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.563 [2024-11-20 15:36:26.375524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.563 qpair failed and we were unable to recover it. 
00:27:22.563 [2024-11-20 15:36:26.385438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.563 [2024-11-20 15:36:26.385492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.563 [2024-11-20 15:36:26.385505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.563 [2024-11-20 15:36:26.385511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.563 [2024-11-20 15:36:26.385517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.563 [2024-11-20 15:36:26.385532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.563 qpair failed and we were unable to recover it. 
00:27:22.563 [2024-11-20 15:36:26.395458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.563 [2024-11-20 15:36:26.395518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.563 [2024-11-20 15:36:26.395531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.563 [2024-11-20 15:36:26.395538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.563 [2024-11-20 15:36:26.395544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.563 [2024-11-20 15:36:26.395558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.563 qpair failed and we were unable to recover it. 
00:27:22.563 [2024-11-20 15:36:26.405489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.563 [2024-11-20 15:36:26.405542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.563 [2024-11-20 15:36:26.405555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.564 [2024-11-20 15:36:26.405561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.564 [2024-11-20 15:36:26.405567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.564 [2024-11-20 15:36:26.405581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.564 qpair failed and we were unable to recover it. 
00:27:22.564 [2024-11-20 15:36:26.415519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.564 [2024-11-20 15:36:26.415576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.564 [2024-11-20 15:36:26.415589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.564 [2024-11-20 15:36:26.415596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.564 [2024-11-20 15:36:26.415601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.564 [2024-11-20 15:36:26.415615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.564 qpair failed and we were unable to recover it. 
00:27:22.564 [2024-11-20 15:36:26.425559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.564 [2024-11-20 15:36:26.425620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.564 [2024-11-20 15:36:26.425634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.564 [2024-11-20 15:36:26.425641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.564 [2024-11-20 15:36:26.425647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.564 [2024-11-20 15:36:26.425662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.564 qpair failed and we were unable to recover it. 
00:27:22.564 [2024-11-20 15:36:26.435577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.564 [2024-11-20 15:36:26.435635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.564 [2024-11-20 15:36:26.435648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.564 [2024-11-20 15:36:26.435655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.564 [2024-11-20 15:36:26.435661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.564 [2024-11-20 15:36:26.435675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.564 qpair failed and we were unable to recover it. 
00:27:22.564 [2024-11-20 15:36:26.445600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.564 [2024-11-20 15:36:26.445656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.564 [2024-11-20 15:36:26.445670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.564 [2024-11-20 15:36:26.445677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.564 [2024-11-20 15:36:26.445683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.564 [2024-11-20 15:36:26.445698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.564 qpair failed and we were unable to recover it. 
00:27:22.564 [2024-11-20 15:36:26.455641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.564 [2024-11-20 15:36:26.455697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.564 [2024-11-20 15:36:26.455710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.564 [2024-11-20 15:36:26.455716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.564 [2024-11-20 15:36:26.455722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.564 [2024-11-20 15:36:26.455736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.564 qpair failed and we were unable to recover it. 
00:27:22.564 [2024-11-20 15:36:26.465675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.564 [2024-11-20 15:36:26.465735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.564 [2024-11-20 15:36:26.465749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.564 [2024-11-20 15:36:26.465755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.564 [2024-11-20 15:36:26.465761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.564 [2024-11-20 15:36:26.465776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.564 qpair failed and we were unable to recover it. 
00:27:22.823 [2024-11-20 15:36:26.475704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.823 [2024-11-20 15:36:26.475782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.823 [2024-11-20 15:36:26.475795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.823 [2024-11-20 15:36:26.475802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.823 [2024-11-20 15:36:26.475808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.823 [2024-11-20 15:36:26.475822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.823 qpair failed and we were unable to recover it. 
00:27:22.823 [2024-11-20 15:36:26.485656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.823 [2024-11-20 15:36:26.485713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.823 [2024-11-20 15:36:26.485726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.823 [2024-11-20 15:36:26.485733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.823 [2024-11-20 15:36:26.485739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.823 [2024-11-20 15:36:26.485753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.823 qpair failed and we were unable to recover it. 
00:27:22.823 [2024-11-20 15:36:26.495750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.823 [2024-11-20 15:36:26.495805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.823 [2024-11-20 15:36:26.495819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.823 [2024-11-20 15:36:26.495825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.823 [2024-11-20 15:36:26.495831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.823 [2024-11-20 15:36:26.495845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.824 qpair failed and we were unable to recover it. 
00:27:22.824 [2024-11-20 15:36:26.505789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.824 [2024-11-20 15:36:26.505842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.824 [2024-11-20 15:36:26.505855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.824 [2024-11-20 15:36:26.505865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.824 [2024-11-20 15:36:26.505871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.824 [2024-11-20 15:36:26.505885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.824 qpair failed and we were unable to recover it. 
00:27:22.824 [2024-11-20 15:36:26.515743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.824 [2024-11-20 15:36:26.515800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.824 [2024-11-20 15:36:26.515814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.824 [2024-11-20 15:36:26.515820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.824 [2024-11-20 15:36:26.515826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.824 [2024-11-20 15:36:26.515841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.824 qpair failed and we were unable to recover it. 
00:27:22.824 [2024-11-20 15:36:26.525836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.824 [2024-11-20 15:36:26.525897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.824 [2024-11-20 15:36:26.525910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.824 [2024-11-20 15:36:26.525917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.824 [2024-11-20 15:36:26.525923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.824 [2024-11-20 15:36:26.525938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.824 qpair failed and we were unable to recover it. 
00:27:22.824 [2024-11-20 15:36:26.535873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.824 [2024-11-20 15:36:26.535928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.824 [2024-11-20 15:36:26.535941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.824 [2024-11-20 15:36:26.535951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.824 [2024-11-20 15:36:26.535958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.824 [2024-11-20 15:36:26.535974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.824 qpair failed and we were unable to recover it. 
00:27:22.824 [2024-11-20 15:36:26.545903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.824 [2024-11-20 15:36:26.545963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.824 [2024-11-20 15:36:26.545977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.824 [2024-11-20 15:36:26.545984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.824 [2024-11-20 15:36:26.545989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.824 [2024-11-20 15:36:26.546007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.824 qpair failed and we were unable to recover it. 
00:27:22.824 [2024-11-20 15:36:26.555980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.824 [2024-11-20 15:36:26.556038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.824 [2024-11-20 15:36:26.556052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.824 [2024-11-20 15:36:26.556058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.824 [2024-11-20 15:36:26.556065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.824 [2024-11-20 15:36:26.556080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.824 qpair failed and we were unable to recover it. 
00:27:22.824 [2024-11-20 15:36:26.565973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.824 [2024-11-20 15:36:26.566030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.824 [2024-11-20 15:36:26.566044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.824 [2024-11-20 15:36:26.566051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.824 [2024-11-20 15:36:26.566058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.824 [2024-11-20 15:36:26.566073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.824 qpair failed and we were unable to recover it. 
00:27:22.824 [2024-11-20 15:36:26.576001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.824 [2024-11-20 15:36:26.576052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.824 [2024-11-20 15:36:26.576066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.824 [2024-11-20 15:36:26.576073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.824 [2024-11-20 15:36:26.576079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.824 [2024-11-20 15:36:26.576094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.824 qpair failed and we were unable to recover it. 
00:27:22.824 [2024-11-20 15:36:26.586036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.824 [2024-11-20 15:36:26.586090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.824 [2024-11-20 15:36:26.586104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.824 [2024-11-20 15:36:26.586110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.824 [2024-11-20 15:36:26.586116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.824 [2024-11-20 15:36:26.586131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.824 qpair failed and we were unable to recover it. 
00:27:22.824 [2024-11-20 15:36:26.596058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.824 [2024-11-20 15:36:26.596117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.824 [2024-11-20 15:36:26.596131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.824 [2024-11-20 15:36:26.596138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.824 [2024-11-20 15:36:26.596144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.824 [2024-11-20 15:36:26.596159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.824 qpair failed and we were unable to recover it. 
00:27:22.824 [2024-11-20 15:36:26.606083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.824 [2024-11-20 15:36:26.606135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.824 [2024-11-20 15:36:26.606148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.824 [2024-11-20 15:36:26.606155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.824 [2024-11-20 15:36:26.606161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.824 [2024-11-20 15:36:26.606176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.824 qpair failed and we were unable to recover it. 
00:27:22.824 [2024-11-20 15:36:26.616138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.825 [2024-11-20 15:36:26.616199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.825 [2024-11-20 15:36:26.616211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.825 [2024-11-20 15:36:26.616218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.825 [2024-11-20 15:36:26.616224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.825 [2024-11-20 15:36:26.616238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.825 qpair failed and we were unable to recover it. 
00:27:22.825 [2024-11-20 15:36:26.626186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.825 [2024-11-20 15:36:26.626241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.825 [2024-11-20 15:36:26.626255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.825 [2024-11-20 15:36:26.626262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.825 [2024-11-20 15:36:26.626268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.825 [2024-11-20 15:36:26.626282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.825 qpair failed and we were unable to recover it. 
00:27:22.825 [2024-11-20 15:36:26.636100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.825 [2024-11-20 15:36:26.636160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.825 [2024-11-20 15:36:26.636175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.825 [2024-11-20 15:36:26.636182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.825 [2024-11-20 15:36:26.636188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.825 [2024-11-20 15:36:26.636203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.825 qpair failed and we were unable to recover it. 
00:27:22.825 [2024-11-20 15:36:26.646196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.825 [2024-11-20 15:36:26.646253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.825 [2024-11-20 15:36:26.646266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.825 [2024-11-20 15:36:26.646273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.825 [2024-11-20 15:36:26.646278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.825 [2024-11-20 15:36:26.646293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.825 qpair failed and we were unable to recover it. 
00:27:22.825 [2024-11-20 15:36:26.656226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.825 [2024-11-20 15:36:26.656282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.825 [2024-11-20 15:36:26.656294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.825 [2024-11-20 15:36:26.656301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.825 [2024-11-20 15:36:26.656307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.825 [2024-11-20 15:36:26.656322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.825 qpair failed and we were unable to recover it. 
00:27:22.825 [2024-11-20 15:36:26.666256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.825 [2024-11-20 15:36:26.666311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.825 [2024-11-20 15:36:26.666324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.825 [2024-11-20 15:36:26.666330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.825 [2024-11-20 15:36:26.666336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.825 [2024-11-20 15:36:26.666351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.825 qpair failed and we were unable to recover it. 
00:27:22.825 [2024-11-20 15:36:26.676295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.825 [2024-11-20 15:36:26.676350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.825 [2024-11-20 15:36:26.676363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.825 [2024-11-20 15:36:26.676370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.825 [2024-11-20 15:36:26.676379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.825 [2024-11-20 15:36:26.676394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.825 qpair failed and we were unable to recover it. 
00:27:22.825 [2024-11-20 15:36:26.686296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.825 [2024-11-20 15:36:26.686351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.825 [2024-11-20 15:36:26.686364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.825 [2024-11-20 15:36:26.686371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.825 [2024-11-20 15:36:26.686377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.825 [2024-11-20 15:36:26.686391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.825 qpair failed and we were unable to recover it. 
00:27:22.825 [2024-11-20 15:36:26.696331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.825 [2024-11-20 15:36:26.696383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.825 [2024-11-20 15:36:26.696396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.825 [2024-11-20 15:36:26.696403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.825 [2024-11-20 15:36:26.696409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.825 [2024-11-20 15:36:26.696423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.825 qpair failed and we were unable to recover it. 
00:27:22.825 [2024-11-20 15:36:26.706366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.825 [2024-11-20 15:36:26.706443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.825 [2024-11-20 15:36:26.706455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.825 [2024-11-20 15:36:26.706462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.825 [2024-11-20 15:36:26.706468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.825 [2024-11-20 15:36:26.706482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.825 qpair failed and we were unable to recover it. 
00:27:22.825 [2024-11-20 15:36:26.716426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.825 [2024-11-20 15:36:26.716478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.826 [2024-11-20 15:36:26.716490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.826 [2024-11-20 15:36:26.716497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.826 [2024-11-20 15:36:26.716503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.826 [2024-11-20 15:36:26.716518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.826 qpair failed and we were unable to recover it. 
00:27:22.826 [2024-11-20 15:36:26.726409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:22.826 [2024-11-20 15:36:26.726464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:22.826 [2024-11-20 15:36:26.726477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:22.826 [2024-11-20 15:36:26.726483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:22.826 [2024-11-20 15:36:26.726489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:22.826 [2024-11-20 15:36:26.726503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:22.826 qpair failed and we were unable to recover it. 
00:27:23.084 [2024-11-20 15:36:26.736453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.084 [2024-11-20 15:36:26.736508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.084 [2024-11-20 15:36:26.736521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.084 [2024-11-20 15:36:26.736528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.084 [2024-11-20 15:36:26.736534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.084 [2024-11-20 15:36:26.736548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.084 qpair failed and we were unable to recover it. 
00:27:23.084 [2024-11-20 15:36:26.746517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.084 [2024-11-20 15:36:26.746598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.084 [2024-11-20 15:36:26.746611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.084 [2024-11-20 15:36:26.746618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.084 [2024-11-20 15:36:26.746624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.084 [2024-11-20 15:36:26.746638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.084 qpair failed and we were unable to recover it. 
00:27:23.084 [2024-11-20 15:36:26.756521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.084 [2024-11-20 15:36:26.756574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.084 [2024-11-20 15:36:26.756587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.084 [2024-11-20 15:36:26.756594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.084 [2024-11-20 15:36:26.756600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.084 [2024-11-20 15:36:26.756615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.084 qpair failed and we were unable to recover it. 
00:27:23.084 [2024-11-20 15:36:26.766531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.084 [2024-11-20 15:36:26.766617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.084 [2024-11-20 15:36:26.766633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.084 [2024-11-20 15:36:26.766640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.084 [2024-11-20 15:36:26.766646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.084 [2024-11-20 15:36:26.766660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.084 qpair failed and we were unable to recover it. 
00:27:23.084 [2024-11-20 15:36:26.776571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.084 [2024-11-20 15:36:26.776625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.084 [2024-11-20 15:36:26.776638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.084 [2024-11-20 15:36:26.776645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.084 [2024-11-20 15:36:26.776651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.084 [2024-11-20 15:36:26.776665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.084 qpair failed and we were unable to recover it. 
00:27:23.084 [2024-11-20 15:36:26.786580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.084 [2024-11-20 15:36:26.786637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.084 [2024-11-20 15:36:26.786650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.084 [2024-11-20 15:36:26.786658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.084 [2024-11-20 15:36:26.786665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.084 [2024-11-20 15:36:26.786680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.084 qpair failed and we were unable to recover it. 
00:27:23.084 [2024-11-20 15:36:26.796542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.084 [2024-11-20 15:36:26.796598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.084 [2024-11-20 15:36:26.796612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.084 [2024-11-20 15:36:26.796619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.084 [2024-11-20 15:36:26.796625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.084 [2024-11-20 15:36:26.796640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.084 qpair failed and we were unable to recover it. 
00:27:23.084 [2024-11-20 15:36:26.806645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.084 [2024-11-20 15:36:26.806699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.084 [2024-11-20 15:36:26.806713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.084 [2024-11-20 15:36:26.806719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.084 [2024-11-20 15:36:26.806729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.084 [2024-11-20 15:36:26.806743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.084 qpair failed and we were unable to recover it. 
00:27:23.084 [2024-11-20 15:36:26.816675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.084 [2024-11-20 15:36:26.816731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.084 [2024-11-20 15:36:26.816745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.084 [2024-11-20 15:36:26.816751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.084 [2024-11-20 15:36:26.816758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.084 [2024-11-20 15:36:26.816773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.084 qpair failed and we were unable to recover it. 
00:27:23.084 [2024-11-20 15:36:26.826743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.084 [2024-11-20 15:36:26.826827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.084 [2024-11-20 15:36:26.826841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.084 [2024-11-20 15:36:26.826847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.084 [2024-11-20 15:36:26.826853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.084 [2024-11-20 15:36:26.826868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.084 qpair failed and we were unable to recover it. 
00:27:23.084 [2024-11-20 15:36:26.836739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.084 [2024-11-20 15:36:26.836796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.084 [2024-11-20 15:36:26.836809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.084 [2024-11-20 15:36:26.836816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.084 [2024-11-20 15:36:26.836822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.084 [2024-11-20 15:36:26.836837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.084 qpair failed and we were unable to recover it. 
00:27:23.084 [2024-11-20 15:36:26.846766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.084 [2024-11-20 15:36:26.846828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.084 [2024-11-20 15:36:26.846841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.084 [2024-11-20 15:36:26.846848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.084 [2024-11-20 15:36:26.846853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.084 [2024-11-20 15:36:26.846867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.084 qpair failed and we were unable to recover it. 
00:27:23.084 [2024-11-20 15:36:26.856821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.084 [2024-11-20 15:36:26.856872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.084 [2024-11-20 15:36:26.856886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.084 [2024-11-20 15:36:26.856892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.084 [2024-11-20 15:36:26.856899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.084 [2024-11-20 15:36:26.856913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.084 qpair failed and we were unable to recover it. 
00:27:23.084 [2024-11-20 15:36:26.866820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.084 [2024-11-20 15:36:26.866875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.084 [2024-11-20 15:36:26.866888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.084 [2024-11-20 15:36:26.866895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.084 [2024-11-20 15:36:26.866901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.084 [2024-11-20 15:36:26.866916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.084 qpair failed and we were unable to recover it.
00:27:23.084 [2024-11-20 15:36:26.876853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.084 [2024-11-20 15:36:26.876909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.084 [2024-11-20 15:36:26.876923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.084 [2024-11-20 15:36:26.876929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.085 [2024-11-20 15:36:26.876935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.085 [2024-11-20 15:36:26.876953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.085 qpair failed and we were unable to recover it.
00:27:23.085 [2024-11-20 15:36:26.886901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.085 [2024-11-20 15:36:26.886981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.085 [2024-11-20 15:36:26.886994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.085 [2024-11-20 15:36:26.887001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.085 [2024-11-20 15:36:26.887006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.085 [2024-11-20 15:36:26.887020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.085 qpair failed and we were unable to recover it.
00:27:23.085 [2024-11-20 15:36:26.896915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.085 [2024-11-20 15:36:26.896968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.085 [2024-11-20 15:36:26.896987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.085 [2024-11-20 15:36:26.896993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.085 [2024-11-20 15:36:26.896999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.085 [2024-11-20 15:36:26.897014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.085 qpair failed and we were unable to recover it.
00:27:23.085 [2024-11-20 15:36:26.906940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.085 [2024-11-20 15:36:26.907032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.085 [2024-11-20 15:36:26.907045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.085 [2024-11-20 15:36:26.907052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.085 [2024-11-20 15:36:26.907058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.085 [2024-11-20 15:36:26.907072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.085 qpair failed and we were unable to recover it.
00:27:23.085 [2024-11-20 15:36:26.916956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.085 [2024-11-20 15:36:26.917009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.085 [2024-11-20 15:36:26.917022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.085 [2024-11-20 15:36:26.917029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.085 [2024-11-20 15:36:26.917035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.085 [2024-11-20 15:36:26.917049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.085 qpair failed and we were unable to recover it.
00:27:23.085 [2024-11-20 15:36:26.926990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.085 [2024-11-20 15:36:26.927045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.085 [2024-11-20 15:36:26.927058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.085 [2024-11-20 15:36:26.927065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.085 [2024-11-20 15:36:26.927071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.085 [2024-11-20 15:36:26.927086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.085 qpair failed and we were unable to recover it.
00:27:23.085 [2024-11-20 15:36:26.937014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.085 [2024-11-20 15:36:26.937070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.085 [2024-11-20 15:36:26.937083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.085 [2024-11-20 15:36:26.937093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.085 [2024-11-20 15:36:26.937099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.085 [2024-11-20 15:36:26.937114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.085 qpair failed and we were unable to recover it.
00:27:23.085 [2024-11-20 15:36:26.947059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.085 [2024-11-20 15:36:26.947117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.085 [2024-11-20 15:36:26.947130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.085 [2024-11-20 15:36:26.947136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.085 [2024-11-20 15:36:26.947142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.085 [2024-11-20 15:36:26.947157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.085 qpair failed and we were unable to recover it.
00:27:23.085 [2024-11-20 15:36:26.957085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.085 [2024-11-20 15:36:26.957162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.085 [2024-11-20 15:36:26.957175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.085 [2024-11-20 15:36:26.957181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.085 [2024-11-20 15:36:26.957187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.085 [2024-11-20 15:36:26.957202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.085 qpair failed and we were unable to recover it.
00:27:23.085 [2024-11-20 15:36:26.967099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.085 [2024-11-20 15:36:26.967154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.085 [2024-11-20 15:36:26.967167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.085 [2024-11-20 15:36:26.967173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.085 [2024-11-20 15:36:26.967179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.085 [2024-11-20 15:36:26.967193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.085 qpair failed and we were unable to recover it.
00:27:23.085 [2024-11-20 15:36:26.977170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.085 [2024-11-20 15:36:26.977228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.085 [2024-11-20 15:36:26.977242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.085 [2024-11-20 15:36:26.977248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.085 [2024-11-20 15:36:26.977254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.085 [2024-11-20 15:36:26.977268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.085 qpair failed and we were unable to recover it.
00:27:23.085 [2024-11-20 15:36:26.987160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.085 [2024-11-20 15:36:26.987214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.085 [2024-11-20 15:36:26.987227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.085 [2024-11-20 15:36:26.987234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.085 [2024-11-20 15:36:26.987240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.085 [2024-11-20 15:36:26.987254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.085 qpair failed and we were unable to recover it.
00:27:23.344 [2024-11-20 15:36:26.997207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.344 [2024-11-20 15:36:26.997275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.344 [2024-11-20 15:36:26.997288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.344 [2024-11-20 15:36:26.997294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.344 [2024-11-20 15:36:26.997300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.344 [2024-11-20 15:36:26.997315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.344 qpair failed and we were unable to recover it.
00:27:23.344 [2024-11-20 15:36:27.007151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.344 [2024-11-20 15:36:27.007207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.344 [2024-11-20 15:36:27.007221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.344 [2024-11-20 15:36:27.007228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.344 [2024-11-20 15:36:27.007234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.344 [2024-11-20 15:36:27.007248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.344 qpair failed and we were unable to recover it.
00:27:23.344 [2024-11-20 15:36:27.017271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.344 [2024-11-20 15:36:27.017323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.344 [2024-11-20 15:36:27.017337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.344 [2024-11-20 15:36:27.017343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.344 [2024-11-20 15:36:27.017349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.344 [2024-11-20 15:36:27.017364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.344 qpair failed and we were unable to recover it.
00:27:23.344 [2024-11-20 15:36:27.027283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.344 [2024-11-20 15:36:27.027338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.344 [2024-11-20 15:36:27.027352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.344 [2024-11-20 15:36:27.027358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.344 [2024-11-20 15:36:27.027364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.344 [2024-11-20 15:36:27.027378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.344 qpair failed and we were unable to recover it.
00:27:23.344 [2024-11-20 15:36:27.037340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.344 [2024-11-20 15:36:27.037395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.344 [2024-11-20 15:36:27.037408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.344 [2024-11-20 15:36:27.037414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.344 [2024-11-20 15:36:27.037421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.344 [2024-11-20 15:36:27.037435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.344 qpair failed and we were unable to recover it.
00:27:23.344 [2024-11-20 15:36:27.047339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.344 [2024-11-20 15:36:27.047389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.344 [2024-11-20 15:36:27.047402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.344 [2024-11-20 15:36:27.047409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.344 [2024-11-20 15:36:27.047415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.344 [2024-11-20 15:36:27.047430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.344 qpair failed and we were unable to recover it.
00:27:23.344 [2024-11-20 15:36:27.057375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.344 [2024-11-20 15:36:27.057430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.344 [2024-11-20 15:36:27.057443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.344 [2024-11-20 15:36:27.057450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.344 [2024-11-20 15:36:27.057456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.344 [2024-11-20 15:36:27.057471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.344 qpair failed and we were unable to recover it.
00:27:23.344 [2024-11-20 15:36:27.067404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.344 [2024-11-20 15:36:27.067462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.344 [2024-11-20 15:36:27.067475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.345 [2024-11-20 15:36:27.067485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.345 [2024-11-20 15:36:27.067491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.345 [2024-11-20 15:36:27.067506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.345 qpair failed and we were unable to recover it.
00:27:23.345 [2024-11-20 15:36:27.077431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.345 [2024-11-20 15:36:27.077486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.345 [2024-11-20 15:36:27.077500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.345 [2024-11-20 15:36:27.077507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.345 [2024-11-20 15:36:27.077513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.345 [2024-11-20 15:36:27.077527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.345 qpair failed and we were unable to recover it.
00:27:23.345 [2024-11-20 15:36:27.087399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.345 [2024-11-20 15:36:27.087454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.345 [2024-11-20 15:36:27.087467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.345 [2024-11-20 15:36:27.087473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.345 [2024-11-20 15:36:27.087480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.345 [2024-11-20 15:36:27.087494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.345 qpair failed and we were unable to recover it.
00:27:23.345 [2024-11-20 15:36:27.097494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.345 [2024-11-20 15:36:27.097569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.345 [2024-11-20 15:36:27.097583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.345 [2024-11-20 15:36:27.097590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.345 [2024-11-20 15:36:27.097596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.345 [2024-11-20 15:36:27.097610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.345 qpair failed and we were unable to recover it.
00:27:23.345 [2024-11-20 15:36:27.107507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.345 [2024-11-20 15:36:27.107566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.345 [2024-11-20 15:36:27.107579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.345 [2024-11-20 15:36:27.107586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.345 [2024-11-20 15:36:27.107592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.345 [2024-11-20 15:36:27.107610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.345 qpair failed and we were unable to recover it.
00:27:23.345 [2024-11-20 15:36:27.117549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.345 [2024-11-20 15:36:27.117608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.345 [2024-11-20 15:36:27.117621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.345 [2024-11-20 15:36:27.117628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.345 [2024-11-20 15:36:27.117634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.345 [2024-11-20 15:36:27.117648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.345 qpair failed and we were unable to recover it.
00:27:23.345 [2024-11-20 15:36:27.127526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.345 [2024-11-20 15:36:27.127603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.345 [2024-11-20 15:36:27.127617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.345 [2024-11-20 15:36:27.127624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.345 [2024-11-20 15:36:27.127630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.345 [2024-11-20 15:36:27.127645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.345 qpair failed and we were unable to recover it.
00:27:23.345 [2024-11-20 15:36:27.137530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.345 [2024-11-20 15:36:27.137581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.345 [2024-11-20 15:36:27.137594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.345 [2024-11-20 15:36:27.137601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.345 [2024-11-20 15:36:27.137608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.345 [2024-11-20 15:36:27.137622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.345 qpair failed and we were unable to recover it.
00:27:23.345 [2024-11-20 15:36:27.147570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.345 [2024-11-20 15:36:27.147672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.345 [2024-11-20 15:36:27.147685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.345 [2024-11-20 15:36:27.147693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.345 [2024-11-20 15:36:27.147698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.345 [2024-11-20 15:36:27.147713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.345 qpair failed and we were unable to recover it.
00:27:23.345 [2024-11-20 15:36:27.157597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.345 [2024-11-20 15:36:27.157652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.345 [2024-11-20 15:36:27.157666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.345 [2024-11-20 15:36:27.157673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.345 [2024-11-20 15:36:27.157679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.345 [2024-11-20 15:36:27.157694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.345 qpair failed and we were unable to recover it.
00:27:23.345 [2024-11-20 15:36:27.167693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.345 [2024-11-20 15:36:27.167755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.345 [2024-11-20 15:36:27.167769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.345 [2024-11-20 15:36:27.167776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.345 [2024-11-20 15:36:27.167781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.345 [2024-11-20 15:36:27.167796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.345 qpair failed and we were unable to recover it.
00:27:23.345 [2024-11-20 15:36:27.177719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.345 [2024-11-20 15:36:27.177775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.346 [2024-11-20 15:36:27.177788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.346 [2024-11-20 15:36:27.177795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.346 [2024-11-20 15:36:27.177801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.346 [2024-11-20 15:36:27.177816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.346 qpair failed and we were unable to recover it.
00:27:23.346 [2024-11-20 15:36:27.187709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.346 [2024-11-20 15:36:27.187783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.346 [2024-11-20 15:36:27.187797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.346 [2024-11-20 15:36:27.187804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.346 [2024-11-20 15:36:27.187810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.346 [2024-11-20 15:36:27.187824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.346 qpair failed and we were unable to recover it.
00:27:23.346 [2024-11-20 15:36:27.197763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.346 [2024-11-20 15:36:27.197820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.346 [2024-11-20 15:36:27.197836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.346 [2024-11-20 15:36:27.197844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.346 [2024-11-20 15:36:27.197850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.346 [2024-11-20 15:36:27.197865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.346 qpair failed and we were unable to recover it.
00:27:23.346 [2024-11-20 15:36:27.207781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.346 [2024-11-20 15:36:27.207836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.346 [2024-11-20 15:36:27.207849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.346 [2024-11-20 15:36:27.207856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.346 [2024-11-20 15:36:27.207862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.346 [2024-11-20 15:36:27.207877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.346 qpair failed and we were unable to recover it.
00:27:23.346 [2024-11-20 15:36:27.217849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.346 [2024-11-20 15:36:27.217915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.346 [2024-11-20 15:36:27.217928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.346 [2024-11-20 15:36:27.217935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.346 [2024-11-20 15:36:27.217941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.346 [2024-11-20 15:36:27.217968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.346 qpair failed and we were unable to recover it. 
00:27:23.346 [2024-11-20 15:36:27.227890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.346 [2024-11-20 15:36:27.228006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.346 [2024-11-20 15:36:27.228025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.346 [2024-11-20 15:36:27.228032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.346 [2024-11-20 15:36:27.228038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.346 [2024-11-20 15:36:27.228053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.346 qpair failed and we were unable to recover it. 
00:27:23.346 [2024-11-20 15:36:27.237888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.346 [2024-11-20 15:36:27.237952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.346 [2024-11-20 15:36:27.237966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.346 [2024-11-20 15:36:27.237972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.346 [2024-11-20 15:36:27.237981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.346 [2024-11-20 15:36:27.237996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.346 qpair failed and we were unable to recover it. 
00:27:23.346 [2024-11-20 15:36:27.247941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.346 [2024-11-20 15:36:27.248002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.346 [2024-11-20 15:36:27.248016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.346 [2024-11-20 15:36:27.248022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.346 [2024-11-20 15:36:27.248029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.346 [2024-11-20 15:36:27.248044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.346 qpair failed and we were unable to recover it. 
00:27:23.609 [2024-11-20 15:36:27.257968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.609 [2024-11-20 15:36:27.258017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.609 [2024-11-20 15:36:27.258030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.609 [2024-11-20 15:36:27.258037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.609 [2024-11-20 15:36:27.258042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.609 [2024-11-20 15:36:27.258057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.609 qpair failed and we were unable to recover it. 
00:27:23.609 [2024-11-20 15:36:27.267997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.609 [2024-11-20 15:36:27.268055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.609 [2024-11-20 15:36:27.268069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.609 [2024-11-20 15:36:27.268076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.609 [2024-11-20 15:36:27.268082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.610 [2024-11-20 15:36:27.268097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.610 qpair failed and we were unable to recover it. 
00:27:23.610 [2024-11-20 15:36:27.278026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.610 [2024-11-20 15:36:27.278077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.610 [2024-11-20 15:36:27.278090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.610 [2024-11-20 15:36:27.278097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.610 [2024-11-20 15:36:27.278103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.610 [2024-11-20 15:36:27.278118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.610 qpair failed and we were unable to recover it. 
00:27:23.610 [2024-11-20 15:36:27.288037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.610 [2024-11-20 15:36:27.288125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.610 [2024-11-20 15:36:27.288139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.610 [2024-11-20 15:36:27.288145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.610 [2024-11-20 15:36:27.288151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.610 [2024-11-20 15:36:27.288165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.610 qpair failed and we were unable to recover it. 
00:27:23.610 [2024-11-20 15:36:27.298128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.610 [2024-11-20 15:36:27.298227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.610 [2024-11-20 15:36:27.298241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.610 [2024-11-20 15:36:27.298248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.610 [2024-11-20 15:36:27.298254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.610 [2024-11-20 15:36:27.298270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.610 qpair failed and we were unable to recover it. 
00:27:23.610 [2024-11-20 15:36:27.308109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.610 [2024-11-20 15:36:27.308165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.610 [2024-11-20 15:36:27.308178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.610 [2024-11-20 15:36:27.308185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.610 [2024-11-20 15:36:27.308191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.610 [2024-11-20 15:36:27.308206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.610 qpair failed and we were unable to recover it. 
00:27:23.610 [2024-11-20 15:36:27.318154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.610 [2024-11-20 15:36:27.318207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.610 [2024-11-20 15:36:27.318221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.610 [2024-11-20 15:36:27.318228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.610 [2024-11-20 15:36:27.318234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.610 [2024-11-20 15:36:27.318249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.610 qpair failed and we were unable to recover it. 
00:27:23.610 [2024-11-20 15:36:27.328205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.610 [2024-11-20 15:36:27.328258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.610 [2024-11-20 15:36:27.328274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.610 [2024-11-20 15:36:27.328281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.610 [2024-11-20 15:36:27.328287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.610 [2024-11-20 15:36:27.328302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.610 qpair failed and we were unable to recover it. 
00:27:23.610 [2024-11-20 15:36:27.338202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.610 [2024-11-20 15:36:27.338252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.610 [2024-11-20 15:36:27.338266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.610 [2024-11-20 15:36:27.338273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.610 [2024-11-20 15:36:27.338279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.610 [2024-11-20 15:36:27.338293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.610 qpair failed and we were unable to recover it. 
00:27:23.610 [2024-11-20 15:36:27.348265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.610 [2024-11-20 15:36:27.348321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.610 [2024-11-20 15:36:27.348334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.610 [2024-11-20 15:36:27.348341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.610 [2024-11-20 15:36:27.348348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.610 [2024-11-20 15:36:27.348362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.610 qpair failed and we were unable to recover it. 
00:27:23.610 [2024-11-20 15:36:27.358195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.610 [2024-11-20 15:36:27.358270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.610 [2024-11-20 15:36:27.358284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.610 [2024-11-20 15:36:27.358291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.610 [2024-11-20 15:36:27.358296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.610 [2024-11-20 15:36:27.358311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.610 qpair failed and we were unable to recover it. 
00:27:23.610 [2024-11-20 15:36:27.368209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.610 [2024-11-20 15:36:27.368266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.610 [2024-11-20 15:36:27.368280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.610 [2024-11-20 15:36:27.368286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.610 [2024-11-20 15:36:27.368296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.610 [2024-11-20 15:36:27.368310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.610 qpair failed and we were unable to recover it. 
00:27:23.610 [2024-11-20 15:36:27.378336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.610 [2024-11-20 15:36:27.378393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.610 [2024-11-20 15:36:27.378407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.610 [2024-11-20 15:36:27.378413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.610 [2024-11-20 15:36:27.378419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.610 [2024-11-20 15:36:27.378434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.610 qpair failed and we were unable to recover it. 
00:27:23.610 [2024-11-20 15:36:27.388370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.611 [2024-11-20 15:36:27.388426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.611 [2024-11-20 15:36:27.388440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.611 [2024-11-20 15:36:27.388446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.611 [2024-11-20 15:36:27.388454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.611 [2024-11-20 15:36:27.388470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.611 qpair failed and we were unable to recover it. 
00:27:23.611 [2024-11-20 15:36:27.398326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.611 [2024-11-20 15:36:27.398407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.611 [2024-11-20 15:36:27.398421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.611 [2024-11-20 15:36:27.398428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.611 [2024-11-20 15:36:27.398434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.611 [2024-11-20 15:36:27.398448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.611 qpair failed and we were unable to recover it. 
00:27:23.611 [2024-11-20 15:36:27.408333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.611 [2024-11-20 15:36:27.408388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.611 [2024-11-20 15:36:27.408402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.611 [2024-11-20 15:36:27.408409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.611 [2024-11-20 15:36:27.408417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.611 [2024-11-20 15:36:27.408431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.611 qpair failed and we were unable to recover it. 
00:27:23.611 [2024-11-20 15:36:27.418439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.611 [2024-11-20 15:36:27.418494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.611 [2024-11-20 15:36:27.418507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.611 [2024-11-20 15:36:27.418513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.611 [2024-11-20 15:36:27.418519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.611 [2024-11-20 15:36:27.418534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.611 qpair failed and we were unable to recover it. 
00:27:23.611 [2024-11-20 15:36:27.428395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.611 [2024-11-20 15:36:27.428489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.611 [2024-11-20 15:36:27.428503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.611 [2024-11-20 15:36:27.428509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.611 [2024-11-20 15:36:27.428515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.611 [2024-11-20 15:36:27.428530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.611 qpair failed and we were unable to recover it. 
00:27:23.611 [2024-11-20 15:36:27.438419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.611 [2024-11-20 15:36:27.438473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.611 [2024-11-20 15:36:27.438486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.611 [2024-11-20 15:36:27.438492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.611 [2024-11-20 15:36:27.438499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.611 [2024-11-20 15:36:27.438513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.611 qpair failed and we were unable to recover it. 
00:27:23.611 [2024-11-20 15:36:27.448466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.611 [2024-11-20 15:36:27.448520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.611 [2024-11-20 15:36:27.448534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.611 [2024-11-20 15:36:27.448540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.611 [2024-11-20 15:36:27.448546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.611 [2024-11-20 15:36:27.448560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.611 qpair failed and we were unable to recover it. 
00:27:23.611 [2024-11-20 15:36:27.458460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.611 [2024-11-20 15:36:27.458514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.611 [2024-11-20 15:36:27.458529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.611 [2024-11-20 15:36:27.458536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.611 [2024-11-20 15:36:27.458542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.611 [2024-11-20 15:36:27.458556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.611 qpair failed and we were unable to recover it. 
00:27:23.611 [2024-11-20 15:36:27.468577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.611 [2024-11-20 15:36:27.468662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.611 [2024-11-20 15:36:27.468674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.611 [2024-11-20 15:36:27.468681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.611 [2024-11-20 15:36:27.468687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.611 [2024-11-20 15:36:27.468701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.611 qpair failed and we were unable to recover it. 
00:27:23.611 [2024-11-20 15:36:27.478516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.611 [2024-11-20 15:36:27.478578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.611 [2024-11-20 15:36:27.478591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.611 [2024-11-20 15:36:27.478598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.611 [2024-11-20 15:36:27.478605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.611 [2024-11-20 15:36:27.478620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.611 qpair failed and we were unable to recover it. 
00:27:23.611 [2024-11-20 15:36:27.488581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.611 [2024-11-20 15:36:27.488668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.611 [2024-11-20 15:36:27.488682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.611 [2024-11-20 15:36:27.488688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.611 [2024-11-20 15:36:27.488694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.611 [2024-11-20 15:36:27.488709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.611 qpair failed and we were unable to recover it. 
00:27:23.611 [2024-11-20 15:36:27.498643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.611 [2024-11-20 15:36:27.498695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.611 [2024-11-20 15:36:27.498708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.612 [2024-11-20 15:36:27.498718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.612 [2024-11-20 15:36:27.498724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.612 [2024-11-20 15:36:27.498739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.612 qpair failed and we were unable to recover it. 
00:27:23.612 [2024-11-20 15:36:27.508660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.612 [2024-11-20 15:36:27.508714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.612 [2024-11-20 15:36:27.508728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.612 [2024-11-20 15:36:27.508734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.612 [2024-11-20 15:36:27.508740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.612 [2024-11-20 15:36:27.508755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.612 qpair failed and we were unable to recover it. 
00:27:23.923 [2024-11-20 15:36:27.518719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.923 [2024-11-20 15:36:27.518772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.923 [2024-11-20 15:36:27.518786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.923 [2024-11-20 15:36:27.518792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.923 [2024-11-20 15:36:27.518798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.923 [2024-11-20 15:36:27.518813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.923 qpair failed and we were unable to recover it. 
00:27:23.923 [2024-11-20 15:36:27.528771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.923 [2024-11-20 15:36:27.528838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.923 [2024-11-20 15:36:27.528851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.923 [2024-11-20 15:36:27.528857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.923 [2024-11-20 15:36:27.528864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.923 [2024-11-20 15:36:27.528878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.923 qpair failed and we were unable to recover it. 
00:27:23.923 [2024-11-20 15:36:27.538772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.923 [2024-11-20 15:36:27.538824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.923 [2024-11-20 15:36:27.538838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.923 [2024-11-20 15:36:27.538844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.923 [2024-11-20 15:36:27.538851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.923 [2024-11-20 15:36:27.538869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.923 qpair failed and we were unable to recover it. 
00:27:23.923 [2024-11-20 15:36:27.548800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.923 [2024-11-20 15:36:27.548856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.923 [2024-11-20 15:36:27.548869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.923 [2024-11-20 15:36:27.548876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.923 [2024-11-20 15:36:27.548882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.923 [2024-11-20 15:36:27.548897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.923 qpair failed and we were unable to recover it. 
00:27:23.923 [2024-11-20 15:36:27.558896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.923 [2024-11-20 15:36:27.559003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.923 [2024-11-20 15:36:27.559017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.923 [2024-11-20 15:36:27.559024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.923 [2024-11-20 15:36:27.559030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.923 [2024-11-20 15:36:27.559045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.923 qpair failed and we were unable to recover it. 
00:27:23.923 [2024-11-20 15:36:27.568860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.923 [2024-11-20 15:36:27.568913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.923 [2024-11-20 15:36:27.568926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.923 [2024-11-20 15:36:27.568933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.923 [2024-11-20 15:36:27.568939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.923 [2024-11-20 15:36:27.568957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.923 qpair failed and we were unable to recover it. 
00:27:23.923 [2024-11-20 15:36:27.578884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.923 [2024-11-20 15:36:27.578937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.923 [2024-11-20 15:36:27.578956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.923 [2024-11-20 15:36:27.578963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.923 [2024-11-20 15:36:27.578970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.923 [2024-11-20 15:36:27.578986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.923 qpair failed and we were unable to recover it. 
00:27:23.923 [2024-11-20 15:36:27.588850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.923 [2024-11-20 15:36:27.588913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.923 [2024-11-20 15:36:27.588927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.923 [2024-11-20 15:36:27.588933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.923 [2024-11-20 15:36:27.588940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.923 [2024-11-20 15:36:27.588959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.923 qpair failed and we were unable to recover it. 
00:27:23.923 [2024-11-20 15:36:27.598963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.923 [2024-11-20 15:36:27.599017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.924 [2024-11-20 15:36:27.599031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.924 [2024-11-20 15:36:27.599038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.924 [2024-11-20 15:36:27.599044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.924 [2024-11-20 15:36:27.599059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.924 qpair failed and we were unable to recover it. 
00:27:23.924 [2024-11-20 15:36:27.608898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.924 [2024-11-20 15:36:27.608956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.924 [2024-11-20 15:36:27.608970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.924 [2024-11-20 15:36:27.608977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.924 [2024-11-20 15:36:27.608983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.924 [2024-11-20 15:36:27.608997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.924 qpair failed and we were unable to recover it. 
00:27:23.924 [2024-11-20 15:36:27.618995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.924 [2024-11-20 15:36:27.619048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.924 [2024-11-20 15:36:27.619061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.924 [2024-11-20 15:36:27.619068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.924 [2024-11-20 15:36:27.619073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.924 [2024-11-20 15:36:27.619089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.924 qpair failed and we were unable to recover it. 
00:27:23.924 [2024-11-20 15:36:27.629032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.924 [2024-11-20 15:36:27.629091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.924 [2024-11-20 15:36:27.629104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.924 [2024-11-20 15:36:27.629117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.924 [2024-11-20 15:36:27.629123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.924 [2024-11-20 15:36:27.629138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.924 qpair failed and we were unable to recover it. 
00:27:23.924 [2024-11-20 15:36:27.639075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.924 [2024-11-20 15:36:27.639128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.924 [2024-11-20 15:36:27.639141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.924 [2024-11-20 15:36:27.639148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.924 [2024-11-20 15:36:27.639154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.924 [2024-11-20 15:36:27.639169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.924 qpair failed and we were unable to recover it. 
00:27:23.924 [2024-11-20 15:36:27.649082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.924 [2024-11-20 15:36:27.649138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.924 [2024-11-20 15:36:27.649151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.924 [2024-11-20 15:36:27.649158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.924 [2024-11-20 15:36:27.649164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.924 [2024-11-20 15:36:27.649178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.924 qpair failed and we were unable to recover it. 
00:27:23.924 [2024-11-20 15:36:27.659140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.924 [2024-11-20 15:36:27.659198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.924 [2024-11-20 15:36:27.659211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.924 [2024-11-20 15:36:27.659218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.924 [2024-11-20 15:36:27.659224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.924 [2024-11-20 15:36:27.659239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.924 qpair failed and we were unable to recover it. 
00:27:23.924 [2024-11-20 15:36:27.669140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.924 [2024-11-20 15:36:27.669232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.924 [2024-11-20 15:36:27.669245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.924 [2024-11-20 15:36:27.669252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.924 [2024-11-20 15:36:27.669258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.924 [2024-11-20 15:36:27.669276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.924 qpair failed and we were unable to recover it. 
00:27:23.924 [2024-11-20 15:36:27.679223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.924 [2024-11-20 15:36:27.679284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.924 [2024-11-20 15:36:27.679296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.924 [2024-11-20 15:36:27.679303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.924 [2024-11-20 15:36:27.679309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.924 [2024-11-20 15:36:27.679323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.924 qpair failed and we were unable to recover it. 
00:27:23.924 [2024-11-20 15:36:27.689252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.924 [2024-11-20 15:36:27.689303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.924 [2024-11-20 15:36:27.689316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.924 [2024-11-20 15:36:27.689323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.924 [2024-11-20 15:36:27.689329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.924 [2024-11-20 15:36:27.689343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.924 qpair failed and we were unable to recover it. 
00:27:23.924 [2024-11-20 15:36:27.699285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.924 [2024-11-20 15:36:27.699336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.924 [2024-11-20 15:36:27.699350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.924 [2024-11-20 15:36:27.699356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.924 [2024-11-20 15:36:27.699362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.924 [2024-11-20 15:36:27.699377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.924 qpair failed and we were unable to recover it. 
00:27:23.924 [2024-11-20 15:36:27.709345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.924 [2024-11-20 15:36:27.709403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.924 [2024-11-20 15:36:27.709417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.924 [2024-11-20 15:36:27.709424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.924 [2024-11-20 15:36:27.709430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.924 [2024-11-20 15:36:27.709444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.924 qpair failed and we were unable to recover it. 
00:27:23.924 [2024-11-20 15:36:27.719341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.924 [2024-11-20 15:36:27.719396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.924 [2024-11-20 15:36:27.719409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.924 [2024-11-20 15:36:27.719416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.924 [2024-11-20 15:36:27.719422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.924 [2024-11-20 15:36:27.719436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.924 qpair failed and we were unable to recover it. 
00:27:23.924 [2024-11-20 15:36:27.729334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.925 [2024-11-20 15:36:27.729390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.925 [2024-11-20 15:36:27.729403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.925 [2024-11-20 15:36:27.729410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.925 [2024-11-20 15:36:27.729416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.925 [2024-11-20 15:36:27.729430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.925 qpair failed and we were unable to recover it. 
00:27:23.925 [2024-11-20 15:36:27.739343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.925 [2024-11-20 15:36:27.739393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.925 [2024-11-20 15:36:27.739406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.925 [2024-11-20 15:36:27.739412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.925 [2024-11-20 15:36:27.739418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.925 [2024-11-20 15:36:27.739433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.925 qpair failed and we were unable to recover it. 
00:27:23.925 [2024-11-20 15:36:27.749386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.925 [2024-11-20 15:36:27.749441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.925 [2024-11-20 15:36:27.749455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.925 [2024-11-20 15:36:27.749461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.925 [2024-11-20 15:36:27.749468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.925 [2024-11-20 15:36:27.749482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.925 qpair failed and we were unable to recover it. 
00:27:23.925 [2024-11-20 15:36:27.759425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.925 [2024-11-20 15:36:27.759481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.925 [2024-11-20 15:36:27.759497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.925 [2024-11-20 15:36:27.759504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.925 [2024-11-20 15:36:27.759510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.925 [2024-11-20 15:36:27.759525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.925 qpair failed and we were unable to recover it. 
00:27:23.925 [2024-11-20 15:36:27.769507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.925 [2024-11-20 15:36:27.769613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.925 [2024-11-20 15:36:27.769626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.925 [2024-11-20 15:36:27.769632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.925 [2024-11-20 15:36:27.769638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.925 [2024-11-20 15:36:27.769653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.925 qpair failed and we were unable to recover it. 
00:27:23.925 [2024-11-20 15:36:27.779468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.925 [2024-11-20 15:36:27.779521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.925 [2024-11-20 15:36:27.779535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.925 [2024-11-20 15:36:27.779541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.925 [2024-11-20 15:36:27.779547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.925 [2024-11-20 15:36:27.779562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.925 qpair failed and we were unable to recover it. 
00:27:23.925 [2024-11-20 15:36:27.789504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.925 [2024-11-20 15:36:27.789579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.925 [2024-11-20 15:36:27.789592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.925 [2024-11-20 15:36:27.789598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.925 [2024-11-20 15:36:27.789605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:23.925 [2024-11-20 15:36:27.789618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.925 qpair failed and we were unable to recover it. 
00:27:23.925 [2024-11-20 15:36:27.799536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.925 [2024-11-20 15:36:27.799594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.925 [2024-11-20 15:36:27.799607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.925 [2024-11-20 15:36:27.799615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.925 [2024-11-20 15:36:27.799624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.925 [2024-11-20 15:36:27.799639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.925 qpair failed and we were unable to recover it.
00:27:23.925 [2024-11-20 15:36:27.809560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.925 [2024-11-20 15:36:27.809649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.925 [2024-11-20 15:36:27.809662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.925 [2024-11-20 15:36:27.809669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.925 [2024-11-20 15:36:27.809675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.925 [2024-11-20 15:36:27.809689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.925 qpair failed and we were unable to recover it.
00:27:23.925 [2024-11-20 15:36:27.819584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.925 [2024-11-20 15:36:27.819638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.925 [2024-11-20 15:36:27.819652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.925 [2024-11-20 15:36:27.819659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.925 [2024-11-20 15:36:27.819665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:23.925 [2024-11-20 15:36:27.819679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.925 qpair failed and we were unable to recover it.
00:27:24.186 [2024-11-20 15:36:27.829625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.186 [2024-11-20 15:36:27.829681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.186 [2024-11-20 15:36:27.829695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.186 [2024-11-20 15:36:27.829702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.186 [2024-11-20 15:36:27.829707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.186 [2024-11-20 15:36:27.829722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.186 qpair failed and we were unable to recover it.
00:27:24.186 [2024-11-20 15:36:27.839700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.186 [2024-11-20 15:36:27.839763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.186 [2024-11-20 15:36:27.839776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.186 [2024-11-20 15:36:27.839783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.186 [2024-11-20 15:36:27.839788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.186 [2024-11-20 15:36:27.839803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.186 qpair failed and we were unable to recover it.
00:27:24.186 [2024-11-20 15:36:27.849711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.186 [2024-11-20 15:36:27.849760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.186 [2024-11-20 15:36:27.849773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.186 [2024-11-20 15:36:27.849780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.186 [2024-11-20 15:36:27.849786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.186 [2024-11-20 15:36:27.849800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.186 qpair failed and we were unable to recover it.
00:27:24.186 [2024-11-20 15:36:27.859702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.186 [2024-11-20 15:36:27.859752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.186 [2024-11-20 15:36:27.859766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.186 [2024-11-20 15:36:27.859772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.186 [2024-11-20 15:36:27.859778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.186 [2024-11-20 15:36:27.859793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.186 qpair failed and we were unable to recover it.
00:27:24.186 [2024-11-20 15:36:27.869743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.186 [2024-11-20 15:36:27.869800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.186 [2024-11-20 15:36:27.869813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.186 [2024-11-20 15:36:27.869819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.186 [2024-11-20 15:36:27.869825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.186 [2024-11-20 15:36:27.869839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.186 qpair failed and we were unable to recover it.
00:27:24.186 [2024-11-20 15:36:27.879753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.186 [2024-11-20 15:36:27.879805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.186 [2024-11-20 15:36:27.879819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.186 [2024-11-20 15:36:27.879826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.186 [2024-11-20 15:36:27.879831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.186 [2024-11-20 15:36:27.879846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.186 qpair failed and we were unable to recover it.
00:27:24.186 [2024-11-20 15:36:27.889829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.186 [2024-11-20 15:36:27.889881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.186 [2024-11-20 15:36:27.889897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.186 [2024-11-20 15:36:27.889904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.186 [2024-11-20 15:36:27.889910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.187 [2024-11-20 15:36:27.889926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.187 qpair failed and we were unable to recover it.
00:27:24.187 [2024-11-20 15:36:27.899817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.187 [2024-11-20 15:36:27.899879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.187 [2024-11-20 15:36:27.899892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.187 [2024-11-20 15:36:27.899899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.187 [2024-11-20 15:36:27.899905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.187 [2024-11-20 15:36:27.899920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.187 qpair failed and we were unable to recover it.
00:27:24.187 [2024-11-20 15:36:27.909773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.187 [2024-11-20 15:36:27.909832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.187 [2024-11-20 15:36:27.909845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.187 [2024-11-20 15:36:27.909852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.187 [2024-11-20 15:36:27.909858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.187 [2024-11-20 15:36:27.909872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.187 qpair failed and we were unable to recover it.
00:27:24.187 [2024-11-20 15:36:27.919869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.187 [2024-11-20 15:36:27.919924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.187 [2024-11-20 15:36:27.919938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.187 [2024-11-20 15:36:27.919945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.187 [2024-11-20 15:36:27.919956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.187 [2024-11-20 15:36:27.919971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.187 qpair failed and we were unable to recover it.
00:27:24.187 [2024-11-20 15:36:27.929916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.187 [2024-11-20 15:36:27.929976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.187 [2024-11-20 15:36:27.929989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.187 [2024-11-20 15:36:27.929996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.187 [2024-11-20 15:36:27.930004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.187 [2024-11-20 15:36:27.930019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.187 qpair failed and we were unable to recover it.
00:27:24.187 [2024-11-20 15:36:27.939888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.187 [2024-11-20 15:36:27.939944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.187 [2024-11-20 15:36:27.939961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.187 [2024-11-20 15:36:27.939967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.187 [2024-11-20 15:36:27.939973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.187 [2024-11-20 15:36:27.939988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.187 qpair failed and we were unable to recover it.
00:27:24.187 [2024-11-20 15:36:27.949971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.187 [2024-11-20 15:36:27.950061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.187 [2024-11-20 15:36:27.950075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.187 [2024-11-20 15:36:27.950081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.187 [2024-11-20 15:36:27.950087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.187 [2024-11-20 15:36:27.950102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.187 qpair failed and we were unable to recover it.
00:27:24.187 [2024-11-20 15:36:27.959988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.187 [2024-11-20 15:36:27.960041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.187 [2024-11-20 15:36:27.960054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.187 [2024-11-20 15:36:27.960061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.187 [2024-11-20 15:36:27.960067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.187 [2024-11-20 15:36:27.960082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.187 qpair failed and we were unable to recover it.
00:27:24.187 [2024-11-20 15:36:27.970043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.187 [2024-11-20 15:36:27.970107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.187 [2024-11-20 15:36:27.970120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.187 [2024-11-20 15:36:27.970126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.187 [2024-11-20 15:36:27.970132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.187 [2024-11-20 15:36:27.970147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.187 qpair failed and we were unable to recover it.
00:27:24.187 [2024-11-20 15:36:27.979979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.187 [2024-11-20 15:36:27.980033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.187 [2024-11-20 15:36:27.980047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.187 [2024-11-20 15:36:27.980054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.187 [2024-11-20 15:36:27.980060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.187 [2024-11-20 15:36:27.980075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.187 qpair failed and we were unable to recover it.
00:27:24.187 [2024-11-20 15:36:27.990009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.187 [2024-11-20 15:36:27.990063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.187 [2024-11-20 15:36:27.990076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.187 [2024-11-20 15:36:27.990083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.187 [2024-11-20 15:36:27.990088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.187 [2024-11-20 15:36:27.990103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.187 qpair failed and we were unable to recover it.
00:27:24.187 [2024-11-20 15:36:28.000106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.187 [2024-11-20 15:36:28.000166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.187 [2024-11-20 15:36:28.000179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.187 [2024-11-20 15:36:28.000186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.187 [2024-11-20 15:36:28.000192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.187 [2024-11-20 15:36:28.000207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.187 qpair failed and we were unable to recover it.
00:27:24.187 [2024-11-20 15:36:28.010132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.187 [2024-11-20 15:36:28.010185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.187 [2024-11-20 15:36:28.010198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.187 [2024-11-20 15:36:28.010205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.187 [2024-11-20 15:36:28.010211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.187 [2024-11-20 15:36:28.010225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.187 qpair failed and we were unable to recover it.
00:27:24.187 [2024-11-20 15:36:28.020160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.188 [2024-11-20 15:36:28.020211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.188 [2024-11-20 15:36:28.020228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.188 [2024-11-20 15:36:28.020235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.188 [2024-11-20 15:36:28.020241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.188 [2024-11-20 15:36:28.020256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.188 qpair failed and we were unable to recover it.
00:27:24.188 [2024-11-20 15:36:28.030214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.188 [2024-11-20 15:36:28.030270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.188 [2024-11-20 15:36:28.030283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.188 [2024-11-20 15:36:28.030289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.188 [2024-11-20 15:36:28.030296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.188 [2024-11-20 15:36:28.030310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.188 qpair failed and we were unable to recover it.
00:27:24.188 [2024-11-20 15:36:28.040279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.188 [2024-11-20 15:36:28.040345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.188 [2024-11-20 15:36:28.040358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.188 [2024-11-20 15:36:28.040365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.188 [2024-11-20 15:36:28.040370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.188 [2024-11-20 15:36:28.040386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.188 qpair failed and we were unable to recover it.
00:27:24.188 [2024-11-20 15:36:28.050256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.188 [2024-11-20 15:36:28.050309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.188 [2024-11-20 15:36:28.050322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.188 [2024-11-20 15:36:28.050328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.188 [2024-11-20 15:36:28.050334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.188 [2024-11-20 15:36:28.050349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.188 qpair failed and we were unable to recover it.
00:27:24.188 [2024-11-20 15:36:28.060271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.188 [2024-11-20 15:36:28.060348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.188 [2024-11-20 15:36:28.060361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.188 [2024-11-20 15:36:28.060372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.188 [2024-11-20 15:36:28.060378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.188 [2024-11-20 15:36:28.060393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.188 qpair failed and we were unable to recover it.
00:27:24.188 [2024-11-20 15:36:28.070339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.188 [2024-11-20 15:36:28.070396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.188 [2024-11-20 15:36:28.070409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.188 [2024-11-20 15:36:28.070416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.188 [2024-11-20 15:36:28.070422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.188 [2024-11-20 15:36:28.070436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.188 qpair failed and we were unable to recover it.
00:27:24.188 [2024-11-20 15:36:28.080344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.188 [2024-11-20 15:36:28.080402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.188 [2024-11-20 15:36:28.080416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.188 [2024-11-20 15:36:28.080424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.188 [2024-11-20 15:36:28.080429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.188 [2024-11-20 15:36:28.080444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.188 qpair failed and we were unable to recover it.
00:27:24.449 [2024-11-20 15:36:28.090394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.449 [2024-11-20 15:36:28.090502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.449 [2024-11-20 15:36:28.090515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.449 [2024-11-20 15:36:28.090522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.449 [2024-11-20 15:36:28.090530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.449 [2024-11-20 15:36:28.090544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.449 qpair failed and we were unable to recover it.
00:27:24.449 [2024-11-20 15:36:28.100386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.449 [2024-11-20 15:36:28.100436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.449 [2024-11-20 15:36:28.100450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.449 [2024-11-20 15:36:28.100456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.449 [2024-11-20 15:36:28.100463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.449 [2024-11-20 15:36:28.100480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.449 qpair failed and we were unable to recover it.
00:27:24.449 [2024-11-20 15:36:28.110426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.449 [2024-11-20 15:36:28.110484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.449 [2024-11-20 15:36:28.110497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.449 [2024-11-20 15:36:28.110503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.449 [2024-11-20 15:36:28.110509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.449 [2024-11-20 15:36:28.110524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.449 qpair failed and we were unable to recover it.
00:27:24.449 [2024-11-20 15:36:28.120448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.449 [2024-11-20 15:36:28.120500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.449 [2024-11-20 15:36:28.120513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.449 [2024-11-20 15:36:28.120519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.449 [2024-11-20 15:36:28.120525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.449 [2024-11-20 15:36:28.120540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.449 qpair failed and we were unable to recover it.
00:27:24.449 [2024-11-20 15:36:28.130470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.449 [2024-11-20 15:36:28.130521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.449 [2024-11-20 15:36:28.130534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.449 [2024-11-20 15:36:28.130541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.449 [2024-11-20 15:36:28.130546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.449 [2024-11-20 15:36:28.130561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.449 qpair failed and we were unable to recover it.
00:27:24.449 [2024-11-20 15:36:28.140562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.449 [2024-11-20 15:36:28.140652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.449 [2024-11-20 15:36:28.140666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.449 [2024-11-20 15:36:28.140672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.449 [2024-11-20 15:36:28.140678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90
00:27:24.449 [2024-11-20 15:36:28.140693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.449 qpair failed and we were unable to recover it.
00:27:24.449 [2024-11-20 15:36:28.150569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.449 [2024-11-20 15:36:28.150651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.449 [2024-11-20 15:36:28.150664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.449 [2024-11-20 15:36:28.150671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.449 [2024-11-20 15:36:28.150677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.449 [2024-11-20 15:36:28.150691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.449 qpair failed and we were unable to recover it. 
00:27:24.449 [2024-11-20 15:36:28.160557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.449 [2024-11-20 15:36:28.160616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.449 [2024-11-20 15:36:28.160629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.449 [2024-11-20 15:36:28.160636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.449 [2024-11-20 15:36:28.160642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.449 [2024-11-20 15:36:28.160656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.449 qpair failed and we were unable to recover it. 
00:27:24.449 [2024-11-20 15:36:28.170625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.449 [2024-11-20 15:36:28.170690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.449 [2024-11-20 15:36:28.170703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.449 [2024-11-20 15:36:28.170709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.449 [2024-11-20 15:36:28.170715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.449 [2024-11-20 15:36:28.170730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.449 qpair failed and we were unable to recover it. 
00:27:24.449 [2024-11-20 15:36:28.180620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.449 [2024-11-20 15:36:28.180674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.449 [2024-11-20 15:36:28.180687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.449 [2024-11-20 15:36:28.180693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.449 [2024-11-20 15:36:28.180699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.449 [2024-11-20 15:36:28.180714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.449 qpair failed and we were unable to recover it. 
00:27:24.449 [2024-11-20 15:36:28.190650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.449 [2024-11-20 15:36:28.190706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.449 [2024-11-20 15:36:28.190719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.449 [2024-11-20 15:36:28.190728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.449 [2024-11-20 15:36:28.190735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.449 [2024-11-20 15:36:28.190749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.449 qpair failed and we were unable to recover it. 
00:27:24.449 [2024-11-20 15:36:28.200681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.449 [2024-11-20 15:36:28.200735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.449 [2024-11-20 15:36:28.200748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.449 [2024-11-20 15:36:28.200755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.449 [2024-11-20 15:36:28.200761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.450 [2024-11-20 15:36:28.200776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.450 qpair failed and we were unable to recover it. 
00:27:24.450 [2024-11-20 15:36:28.210627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.450 [2024-11-20 15:36:28.210681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.450 [2024-11-20 15:36:28.210694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.450 [2024-11-20 15:36:28.210700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.450 [2024-11-20 15:36:28.210706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.450 [2024-11-20 15:36:28.210721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.450 qpair failed and we were unable to recover it. 
00:27:24.450 [2024-11-20 15:36:28.220767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.450 [2024-11-20 15:36:28.220820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.450 [2024-11-20 15:36:28.220833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.450 [2024-11-20 15:36:28.220839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.450 [2024-11-20 15:36:28.220846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.450 [2024-11-20 15:36:28.220861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.450 qpair failed and we were unable to recover it. 
00:27:24.450 [2024-11-20 15:36:28.230755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.450 [2024-11-20 15:36:28.230807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.450 [2024-11-20 15:36:28.230820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.450 [2024-11-20 15:36:28.230827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.450 [2024-11-20 15:36:28.230832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.450 [2024-11-20 15:36:28.230851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.450 qpair failed and we were unable to recover it. 
00:27:24.450 [2024-11-20 15:36:28.240793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.450 [2024-11-20 15:36:28.240848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.450 [2024-11-20 15:36:28.240862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.450 [2024-11-20 15:36:28.240868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.450 [2024-11-20 15:36:28.240874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.450 [2024-11-20 15:36:28.240889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.450 qpair failed and we were unable to recover it. 
00:27:24.450 [2024-11-20 15:36:28.250770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.450 [2024-11-20 15:36:28.250855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.450 [2024-11-20 15:36:28.250869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.450 [2024-11-20 15:36:28.250875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.450 [2024-11-20 15:36:28.250881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.450 [2024-11-20 15:36:28.250896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.450 qpair failed and we were unable to recover it. 
00:27:24.450 [2024-11-20 15:36:28.260839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.450 [2024-11-20 15:36:28.260907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.450 [2024-11-20 15:36:28.260920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.450 [2024-11-20 15:36:28.260926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.450 [2024-11-20 15:36:28.260932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.450 [2024-11-20 15:36:28.260951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.450 qpair failed and we were unable to recover it. 
00:27:24.450 [2024-11-20 15:36:28.270873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.450 [2024-11-20 15:36:28.270955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.450 [2024-11-20 15:36:28.270969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.450 [2024-11-20 15:36:28.270975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.450 [2024-11-20 15:36:28.270981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.450 [2024-11-20 15:36:28.270996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.450 qpair failed and we were unable to recover it. 
00:27:24.450 [2024-11-20 15:36:28.280905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.450 [2024-11-20 15:36:28.280962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.450 [2024-11-20 15:36:28.280975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.450 [2024-11-20 15:36:28.280982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.450 [2024-11-20 15:36:28.280988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.450 [2024-11-20 15:36:28.281002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.450 qpair failed and we were unable to recover it. 
00:27:24.450 [2024-11-20 15:36:28.290928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.450 [2024-11-20 15:36:28.290984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.450 [2024-11-20 15:36:28.290997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.450 [2024-11-20 15:36:28.291004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.450 [2024-11-20 15:36:28.291010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.450 [2024-11-20 15:36:28.291025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.450 qpair failed and we were unable to recover it. 
00:27:24.450 [2024-11-20 15:36:28.300930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.450 [2024-11-20 15:36:28.301018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.450 [2024-11-20 15:36:28.301033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.450 [2024-11-20 15:36:28.301040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.450 [2024-11-20 15:36:28.301046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.450 [2024-11-20 15:36:28.301061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.450 qpair failed and we were unable to recover it. 
00:27:24.450 [2024-11-20 15:36:28.310988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.450 [2024-11-20 15:36:28.311050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.450 [2024-11-20 15:36:28.311063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.450 [2024-11-20 15:36:28.311070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.450 [2024-11-20 15:36:28.311075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.450 [2024-11-20 15:36:28.311090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.450 qpair failed and we were unable to recover it. 
00:27:24.450 [2024-11-20 15:36:28.321025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.450 [2024-11-20 15:36:28.321134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.450 [2024-11-20 15:36:28.321157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.450 [2024-11-20 15:36:28.321164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.450 [2024-11-20 15:36:28.321170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.451 [2024-11-20 15:36:28.321185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.451 qpair failed and we were unable to recover it. 
00:27:24.451 [2024-11-20 15:36:28.331030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.451 [2024-11-20 15:36:28.331081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.451 [2024-11-20 15:36:28.331094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.451 [2024-11-20 15:36:28.331101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.451 [2024-11-20 15:36:28.331107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.451 [2024-11-20 15:36:28.331121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.451 qpair failed and we were unable to recover it. 
00:27:24.451 [2024-11-20 15:36:28.340998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.451 [2024-11-20 15:36:28.341048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.451 [2024-11-20 15:36:28.341061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.451 [2024-11-20 15:36:28.341067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.451 [2024-11-20 15:36:28.341073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.451 [2024-11-20 15:36:28.341088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.451 qpair failed and we were unable to recover it. 
00:27:24.451 [2024-11-20 15:36:28.351124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.451 [2024-11-20 15:36:28.351181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.451 [2024-11-20 15:36:28.351195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.451 [2024-11-20 15:36:28.351201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.451 [2024-11-20 15:36:28.351208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.451 [2024-11-20 15:36:28.351222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.451 qpair failed and we were unable to recover it. 
00:27:24.711 [2024-11-20 15:36:28.361130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.711 [2024-11-20 15:36:28.361185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.711 [2024-11-20 15:36:28.361198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.711 [2024-11-20 15:36:28.361204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.711 [2024-11-20 15:36:28.361213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.711 [2024-11-20 15:36:28.361228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.711 qpair failed and we were unable to recover it. 
00:27:24.711 [2024-11-20 15:36:28.371137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.711 [2024-11-20 15:36:28.371205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.711 [2024-11-20 15:36:28.371217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.711 [2024-11-20 15:36:28.371224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.711 [2024-11-20 15:36:28.371230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.711 [2024-11-20 15:36:28.371244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.711 qpair failed and we were unable to recover it. 
00:27:24.711 [2024-11-20 15:36:28.381197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.711 [2024-11-20 15:36:28.381250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.711 [2024-11-20 15:36:28.381263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.711 [2024-11-20 15:36:28.381269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.711 [2024-11-20 15:36:28.381276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.711 [2024-11-20 15:36:28.381290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.711 qpair failed and we were unable to recover it. 
00:27:24.711 [2024-11-20 15:36:28.391220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.711 [2024-11-20 15:36:28.391281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.711 [2024-11-20 15:36:28.391293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.711 [2024-11-20 15:36:28.391300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.711 [2024-11-20 15:36:28.391306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.711 [2024-11-20 15:36:28.391321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.711 qpair failed and we were unable to recover it. 
00:27:24.711 [2024-11-20 15:36:28.401279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.711 [2024-11-20 15:36:28.401334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.711 [2024-11-20 15:36:28.401348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.711 [2024-11-20 15:36:28.401354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.711 [2024-11-20 15:36:28.401360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.711 [2024-11-20 15:36:28.401375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.711 qpair failed and we were unable to recover it. 
00:27:24.711 [2024-11-20 15:36:28.411308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.711 [2024-11-20 15:36:28.411361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.711 [2024-11-20 15:36:28.411374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.711 [2024-11-20 15:36:28.411381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.711 [2024-11-20 15:36:28.411386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.711 [2024-11-20 15:36:28.411400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.711 qpair failed and we were unable to recover it. 
00:27:24.711 [2024-11-20 15:36:28.421292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.711 [2024-11-20 15:36:28.421341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.711 [2024-11-20 15:36:28.421355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.711 [2024-11-20 15:36:28.421361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.711 [2024-11-20 15:36:28.421367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.711 [2024-11-20 15:36:28.421382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.711 qpair failed and we were unable to recover it. 
00:27:24.712 [2024-11-20 15:36:28.431302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.712 [2024-11-20 15:36:28.431360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.712 [2024-11-20 15:36:28.431373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.712 [2024-11-20 15:36:28.431379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.712 [2024-11-20 15:36:28.431385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.712 [2024-11-20 15:36:28.431400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.712 qpair failed and we were unable to recover it. 
00:27:24.712 [2024-11-20 15:36:28.441353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.712 [2024-11-20 15:36:28.441406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.712 [2024-11-20 15:36:28.441419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.712 [2024-11-20 15:36:28.441426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.712 [2024-11-20 15:36:28.441432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.712 [2024-11-20 15:36:28.441447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.712 qpair failed and we were unable to recover it. 
00:27:24.712 [2024-11-20 15:36:28.451440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.712 [2024-11-20 15:36:28.451494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.712 [2024-11-20 15:36:28.451510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.712 [2024-11-20 15:36:28.451516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.712 [2024-11-20 15:36:28.451522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.712 [2024-11-20 15:36:28.451536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.712 qpair failed and we were unable to recover it. 
00:27:24.712 [2024-11-20 15:36:28.461407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.712 [2024-11-20 15:36:28.461464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.712 [2024-11-20 15:36:28.461477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.712 [2024-11-20 15:36:28.461483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.712 [2024-11-20 15:36:28.461489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.712 [2024-11-20 15:36:28.461504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.712 qpair failed and we were unable to recover it. 
00:27:24.712 [2024-11-20 15:36:28.471437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.712 [2024-11-20 15:36:28.471492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.712 [2024-11-20 15:36:28.471505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.712 [2024-11-20 15:36:28.471511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.712 [2024-11-20 15:36:28.471518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdef0000b90 00:27:24.712 [2024-11-20 15:36:28.471533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.712 qpair failed and we were unable to recover it. 00:27:24.712 [2024-11-20 15:36:28.471693] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:27:24.712 A controller has encountered a failure and is being reset. 00:27:24.712 qpair failed and we were unable to recover it. 00:27:24.712 qpair failed and we were unable to recover it. 00:27:24.712 qpair failed and we were unable to recover it. 00:27:24.712 qpair failed and we were unable to recover it. 00:27:24.712 Controller properly reset. 
00:27:24.712 Initializing NVMe Controllers 00:27:24.712 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:24.712 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:24.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:24.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:24.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:24.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:24.712 Initialization complete. Launching workers. 00:27:24.712 Starting thread on core 1 00:27:24.712 Starting thread on core 2 00:27:24.712 Starting thread on core 3 00:27:24.712 Starting thread on core 0 00:27:24.712 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:24.712 00:27:24.712 real 0m10.772s 00:27:24.712 user 0m19.548s 00:27:24.712 sys 0m4.635s 00:27:24.712 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.972 ************************************ 00:27:24.972 END TEST nvmf_target_disconnect_tc2 00:27:24.972 ************************************ 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:24.972 15:36:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:24.972 rmmod nvme_tcp 00:27:24.972 rmmod nvme_fabrics 00:27:24.972 rmmod nvme_keyring 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2323229 ']' 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2323229 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2323229 ']' 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2323229 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2323229 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 
00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2323229' 00:27:24.972 killing process with pid 2323229 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2323229 00:27:24.972 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2323229 00:27:25.231 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:25.231 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:25.231 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:25.231 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:25.231 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:25.231 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:25.231 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:25.231 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:25.231 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:25.231 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.231 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.232 15:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.134 15:36:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:27.134 00:27:27.134 real 0m19.509s 00:27:27.134 user 0m47.136s 00:27:27.134 
sys 0m9.495s 00:27:27.134 15:36:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:27.134 15:36:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:27.134 ************************************ 00:27:27.134 END TEST nvmf_target_disconnect 00:27:27.134 ************************************ 00:27:27.392 15:36:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:27.392 00:27:27.392 real 5m52.730s 00:27:27.392 user 10m33.305s 00:27:27.392 sys 1m58.788s 00:27:27.392 15:36:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:27.392 15:36:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.392 ************************************ 00:27:27.392 END TEST nvmf_host 00:27:27.393 ************************************ 00:27:27.393 15:36:31 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:27.393 15:36:31 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:27.393 15:36:31 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:27.393 15:36:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:27.393 15:36:31 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:27.393 15:36:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:27.393 ************************************ 00:27:27.393 START TEST nvmf_target_core_interrupt_mode 00:27:27.393 ************************************ 00:27:27.393 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:27.393 * Looking for test storage... 
00:27:27.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:27.393 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:27.393 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:27:27.393 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:27.652 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:27.653 15:36:31 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:27.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.653 --rc 
genhtml_branch_coverage=1 00:27:27.653 --rc genhtml_function_coverage=1 00:27:27.653 --rc genhtml_legend=1 00:27:27.653 --rc geninfo_all_blocks=1 00:27:27.653 --rc geninfo_unexecuted_blocks=1 00:27:27.653 00:27:27.653 ' 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:27.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.653 --rc genhtml_branch_coverage=1 00:27:27.653 --rc genhtml_function_coverage=1 00:27:27.653 --rc genhtml_legend=1 00:27:27.653 --rc geninfo_all_blocks=1 00:27:27.653 --rc geninfo_unexecuted_blocks=1 00:27:27.653 00:27:27.653 ' 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:27.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.653 --rc genhtml_branch_coverage=1 00:27:27.653 --rc genhtml_function_coverage=1 00:27:27.653 --rc genhtml_legend=1 00:27:27.653 --rc geninfo_all_blocks=1 00:27:27.653 --rc geninfo_unexecuted_blocks=1 00:27:27.653 00:27:27.653 ' 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:27.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.653 --rc genhtml_branch_coverage=1 00:27:27.653 --rc genhtml_function_coverage=1 00:27:27.653 --rc genhtml_legend=1 00:27:27.653 --rc geninfo_all_blocks=1 00:27:27.653 --rc geninfo_unexecuted_blocks=1 00:27:27.653 00:27:27.653 ' 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.653 
15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.653 15:36:31 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:27.653 
15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:27.653 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:27.654 ************************************ 00:27:27.654 START TEST nvmf_abort 00:27:27.654 ************************************ 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:27.654 * Looking for test storage... 
00:27:27.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:27.654 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:27.913 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:27.913 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:27.913 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:27.913 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:27.913 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:27.913 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:27.914 15:36:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:27.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.914 --rc genhtml_branch_coverage=1 00:27:27.914 --rc genhtml_function_coverage=1 00:27:27.914 --rc genhtml_legend=1 00:27:27.914 --rc geninfo_all_blocks=1 00:27:27.914 --rc geninfo_unexecuted_blocks=1 00:27:27.914 00:27:27.914 ' 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:27.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.914 --rc genhtml_branch_coverage=1 00:27:27.914 --rc genhtml_function_coverage=1 00:27:27.914 --rc genhtml_legend=1 00:27:27.914 --rc geninfo_all_blocks=1 00:27:27.914 --rc geninfo_unexecuted_blocks=1 00:27:27.914 00:27:27.914 ' 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:27.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.914 --rc genhtml_branch_coverage=1 00:27:27.914 --rc genhtml_function_coverage=1 00:27:27.914 --rc genhtml_legend=1 00:27:27.914 --rc geninfo_all_blocks=1 00:27:27.914 --rc geninfo_unexecuted_blocks=1 00:27:27.914 00:27:27.914 ' 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:27.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.914 --rc genhtml_branch_coverage=1 00:27:27.914 --rc genhtml_function_coverage=1 00:27:27.914 --rc genhtml_legend=1 00:27:27.914 --rc geninfo_all_blocks=1 00:27:27.914 --rc geninfo_unexecuted_blocks=1 00:27:27.914 00:27:27.914 ' 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.914 15:36:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:27.914 15:36:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:27.914 15:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:34.484 15:36:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:34.484 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:34.484 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:34.485 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:34.485 
15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:34.485 Found net devices under 0000:86:00.0: cvl_0_0 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:34.485 Found net devices under 0000:86:00.1: cvl_0_1 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:34.485 15:36:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:34.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:34.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:27:34.485 00:27:34.485 --- 10.0.0.2 ping statistics --- 00:27:34.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.485 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:34.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:34.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:27:34.485 00:27:34.485 --- 10.0.0.1 ping statistics --- 00:27:34.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.485 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2327899 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2327899 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2327899 ']' 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:34.485 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.485 [2024-11-20 15:36:37.555421] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:34.485 [2024-11-20 15:36:37.556468] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:27:34.485 [2024-11-20 15:36:37.556509] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.485 [2024-11-20 15:36:37.635944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:34.485 [2024-11-20 15:36:37.678531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.485 [2024-11-20 15:36:37.678568] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.485 [2024-11-20 15:36:37.678576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.485 [2024-11-20 15:36:37.678582] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.486 [2024-11-20 15:36:37.678587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:34.486 [2024-11-20 15:36:37.679926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.486 [2024-11-20 15:36:37.679959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.486 [2024-11-20 15:36:37.679961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.486 [2024-11-20 15:36:37.747116] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:34.486 [2024-11-20 15:36:37.747670] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:34.486 [2024-11-20 15:36:37.747793] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
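As an aside on what the trace above is recording: the `nvmf_tcp_init` steps (namespace creation, moving the target NIC, address assignment, link bring-up, ping check) boil down to a short sequence of ip(8) commands. The sketch below is a dry run that only prints each command, since executing them needs root and the `cvl_0_*` interfaces present on this rig; the interface names and addresses are copied from the log.

```shell
# Dry-run sketch of the target-namespace setup traced above (nvmf_tcp_init).
# "echo" is prepended so each command is printed, not executed; running them
# for real requires root and the cvl_0_* devices from this machine.
setup_tcp_ns() {
  local run=echo
  local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

  $run ip netns add "$ns"                           # isolated namespace for the target
  $run ip link set "$target_if" netns "$ns"         # move the target NIC into it
  $run ip addr add 10.0.0.1/24 dev "$initiator_if"  # initiator side keeps 10.0.0.1
  $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  $run ip link set "$initiator_if" up
  $run ip netns exec "$ns" ip link set "$target_if" up
  $run ip netns exec "$ns" ip link set lo up
  $run ping -c 1 10.0.0.2                           # reachability check, as in the log
}

setup_tcp_ns
```

Swapping `run=echo` for an empty `run=` would execute the sequence for real; the harness additionally opens TCP port 4420 with an `SPDK_NVMF`-tagged iptables rule before the ping check.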
00:27:34.486 [2024-11-20 15:36:37.747972] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.486 [2024-11-20 15:36:37.824765] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:27:34.486 Malloc0 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.486 Delay0 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.486 [2024-11-20 15:36:37.916759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.486 15:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:34.486 [2024-11-20 15:36:38.046773] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:36.392 Initializing NVMe Controllers 00:27:36.392 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:36.392 controller IO queue size 128 less than required 00:27:36.392 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:36.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:36.392 Initialization complete. Launching workers. 
00:27:36.392 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37045 00:27:36.392 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37102, failed to submit 66 00:27:36.392 success 37045, unsuccessful 57, failed 0 00:27:36.392 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:36.392 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.392 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:36.392 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.392 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:36.392 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:36.392 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:36.392 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:36.392 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:36.392 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:36.392 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:36.392 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:36.392 rmmod nvme_tcp 00:27:36.392 rmmod nvme_fabrics 00:27:36.392 rmmod nvme_keyring 00:27:36.392 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:36.392 15:36:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:36.392 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:36.392 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2327899 ']' 00:27:36.392 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2327899 00:27:36.392 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2327899 ']' 00:27:36.392 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2327899 00:27:36.393 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:36.393 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.393 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2327899 00:27:36.393 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:36.393 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:36.393 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2327899' 00:27:36.393 killing process with pid 2327899 00:27:36.393 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2327899 00:27:36.393 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2327899 00:27:36.650 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:36.650 15:36:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:36.650 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:36.650 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:36.651 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:36.651 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:36.651 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:36.651 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:36.651 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:36.651 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.651 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.651 15:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.555 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:38.555 00:27:38.555 real 0m11.067s 00:27:38.555 user 0m10.158s 00:27:38.555 sys 0m5.691s 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:38.814 ************************************ 00:27:38.814 END TEST nvmf_abort 00:27:38.814 ************************************ 00:27:38.814 15:36:42 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:38.814 ************************************ 00:27:38.814 START TEST nvmf_ns_hotplug_stress 00:27:38.814 ************************************ 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:38.814 * Looking for test storage... 
00:27:38.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:38.814 15:36:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:38.814 15:36:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:38.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.814 --rc genhtml_branch_coverage=1 00:27:38.814 --rc genhtml_function_coverage=1 00:27:38.814 --rc genhtml_legend=1 00:27:38.814 --rc geninfo_all_blocks=1 00:27:38.814 --rc geninfo_unexecuted_blocks=1 00:27:38.814 00:27:38.814 ' 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:38.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.814 --rc genhtml_branch_coverage=1 00:27:38.814 --rc genhtml_function_coverage=1 00:27:38.814 --rc genhtml_legend=1 00:27:38.814 --rc geninfo_all_blocks=1 00:27:38.814 --rc geninfo_unexecuted_blocks=1 00:27:38.814 00:27:38.814 ' 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:38.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.814 --rc genhtml_branch_coverage=1 00:27:38.814 --rc genhtml_function_coverage=1 00:27:38.814 --rc genhtml_legend=1 00:27:38.814 --rc geninfo_all_blocks=1 00:27:38.814 --rc geninfo_unexecuted_blocks=1 00:27:38.814 00:27:38.814 ' 00:27:38.814 15:36:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:38.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.814 --rc genhtml_branch_coverage=1 00:27:38.814 --rc genhtml_function_coverage=1 00:27:38.814 --rc genhtml_legend=1 00:27:38.814 --rc geninfo_all_blocks=1 00:27:38.814 --rc geninfo_unexecuted_blocks=1 00:27:38.814 00:27:38.814 ' 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.814 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.815 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.815 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.815 15:36:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.073 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:39.073 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:39.073 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.073 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.073 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.073 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.073 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.073 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:39.073 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.073 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.073 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.074 
15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:39.074 15:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:45.647 
15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:45.647 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.648 15:36:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:45.648 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.648 15:36:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:45.648 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.648 
15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:45.648 Found net devices under 0000:86:00.0: cvl_0_0 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:45.648 Found net devices under 0000:86:00.1: cvl_0_1 00:27:45.648 
15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:45.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:45.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:27:45.648 00:27:45.648 --- 10.0.0.2 ping statistics --- 00:27:45.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.648 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:45.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:27:45.648 00:27:45.648 --- 10.0.0.1 ping statistics --- 00:27:45.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.648 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.648 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:45.649 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:45.649 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.649 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:45.649 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:45.649 15:36:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.649 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:45.649 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:45.649 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:45.649 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:45.649 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:45.649 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:45.649 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2331760 00:27:45.649 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2331760 00:27:45.649 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:45.649 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2331760 ']' 00:27:45.649 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.649 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:45.649 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.649 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:45.649 15:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:45.649 [2024-11-20 15:36:48.657526] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:45.649 [2024-11-20 15:36:48.658528] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:27:45.649 [2024-11-20 15:36:48.658568] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.649 [2024-11-20 15:36:48.741998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:45.649 [2024-11-20 15:36:48.784041] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.649 [2024-11-20 15:36:48.784082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:45.649 [2024-11-20 15:36:48.784090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.649 [2024-11-20 15:36:48.784097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.649 [2024-11-20 15:36:48.784102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
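The `nvmf_tcp_init` records above boil down to a short sequence: move the first NIC into a fresh network namespace for the target, address both ends of the link, open TCP port 4420, and ping across. A minimal sketch of that sequence follows; the `run` helper only echoes each command, since the real ones need root and the `cvl_0_*` interfaces of this particular rig, and the exact command order is taken from the log records above.

```shell
#!/usr/bin/env bash
# Sketch of the TCP link setup performed by nvmf_tcp_init in the log above.
# "run" echoes rather than executes: these commands require root and the
# cvl_0_0/cvl_0_1 interfaces present on this test machine.
run() { echo "+ $*"; }

run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add cvl_0_0_ns_spdk
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target NIC into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                # initiator -> target
run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
```

With the link verified, the target app is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...`), which is why every subsequent RPC reaches it at 10.0.0.2:4420.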
00:27:45.649 [2024-11-20 15:36:48.785543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:45.649 [2024-11-20 15:36:48.785582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.649 [2024-11-20 15:36:48.785583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:45.649 [2024-11-20 15:36:48.853030] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:45.649 [2024-11-20 15:36:48.853850] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:45.649 [2024-11-20 15:36:48.853980] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:45.649 [2024-11-20 15:36:48.854160] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:45.649 15:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:45.649 15:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:45.649 15:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:45.649 15:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:45.649 15:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:45.649 15:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.649 15:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:27:45.649 15:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:45.909 [2024-11-20 15:36:49.702418] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:45.909 15:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:46.167 15:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:46.426 [2024-11-20 15:36:50.110854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.426 15:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:46.685 15:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:46.685 Malloc0 00:27:46.685 15:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:46.944 Delay0 00:27:46.944 15:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:47.203 15:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:47.462 NULL1 00:27:47.462 15:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:27:47.462 15:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:47.462 15:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2332248 00:27:47.462 15:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:27:47.462 15:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:47.720 15:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:47.979 15:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:47.979 15:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:48.238 true 00:27:48.238 15:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:27:48.238 15:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.238 15:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:48.498 15:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:48.498 15:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:48.949 true 00:27:48.949 15:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:27:48.949 15:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.949 15:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:49.208 15:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:49.208 15:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:49.467 true 00:27:49.467 15:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:27:49.467 15:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.726 15:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:49.985 15:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:49.985 15:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:49.985 true 00:27:49.985 15:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:27:49.985 15:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.243 15:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.502 15:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:50.502 15:36:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:50.760 true 00:27:50.760 15:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:27:50.760 15:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.018 15:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:51.276 15:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:51.276 15:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:51.276 true 00:27:51.276 15:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:27:51.276 15:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.534 15:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:51.792 15:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:27:51.792 15:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:52.051 true 00:27:52.051 15:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:27:52.051 15:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:52.309 15:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.567 15:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:52.567 15:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:52.567 true 00:27:52.567 15:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:27:52.567 15:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:52.825 15:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:53.084 15:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:27:53.084 15:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:53.342 true 00:27:53.342 15:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:27:53.342 15:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:53.601 15:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:53.860 15:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:53.860 15:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:53.860 true 00:27:53.860 15:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:27:53.860 15:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.118 15:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.376 15:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:54.376 15:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:54.635 true 00:27:54.635 15:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:27:54.635 15:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.893 15:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.152 15:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:55.152 15:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:55.152 true 00:27:55.152 15:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:27:55.152 15:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.416 15:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.674 15:36:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:55.674 15:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:55.932 true 00:27:55.932 15:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:27:55.932 15:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:56.192 15:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:56.451 15:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:56.451 15:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:56.451 true 00:27:56.451 15:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:27:56.451 15:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:56.710 15:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:27:56.970 15:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:56.970 15:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:57.229 true 00:27:57.229 15:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:27:57.230 15:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.489 15:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.748 15:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:57.748 15:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:57.748 true 00:27:57.748 15:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:27:57.748 15:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.007 15:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.266 15:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:58.266 15:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:58.525 true 00:27:58.525 15:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:27:58.525 15:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.784 15:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.043 15:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:27:59.043 15:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:27:59.043 true 00:27:59.043 15:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:27:59.043 15:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.303 15:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.562 15:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:59.562 15:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:59.820 true 00:27:59.821 15:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:27:59.821 15:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.080 15:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.340 15:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:00.340 15:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:00.340 true 00:28:00.599 15:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:00.599 15:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.599 15:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.858 15:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:00.858 15:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:01.116 true 00:28:01.116 15:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:01.116 15:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.374 15:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.633 15:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:01.633 15:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:01.892 true 00:28:01.892 15:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:01.892 15:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.892 15:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.152 15:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:02.152 15:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:02.410 true 00:28:02.410 15:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:02.410 15:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.669 15:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.927 15:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:02.927 15:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:03.186 true 00:28:03.186 15:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:03.186 15:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.444 15:37:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.444 15:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:03.444 15:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:03.703 true 00:28:03.703 15:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:03.703 15:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.962 15:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.221 15:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:04.221 15:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:04.479 true 00:28:04.479 15:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:04.479 15:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:28:04.740 15:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.740 15:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:04.740 15:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:04.999 true 00:28:04.999 15:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:04.999 15:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.258 15:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:05.521 15:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:28:05.521 15:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:28:05.779 true 00:28:05.779 15:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:05.779 15:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:28:06.038 15:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:06.297 15:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:28:06.297 15:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:28:06.297 true 00:28:06.297 15:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:06.297 15:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:06.555 15:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:06.814 15:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:28:06.814 15:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:28:07.072 true 00:28:07.072 15:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:07.072 15:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.332 15:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.590 15:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:28:07.590 15:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:28:07.590 true 00:28:07.849 15:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:07.849 15:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.849 15:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:08.108 15:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:28:08.108 15:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:28:08.367 true 00:28:08.367 15:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:08.367 15:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.625 15:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:08.884 15:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:28:08.884 15:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:28:09.142 true 00:28:09.142 15:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:09.142 15:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.143 15:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:09.401 15:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:28:09.401 15:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:28:09.660 true 00:28:09.660 15:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:09.660 15:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.918 15:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.177 15:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:28:10.177 15:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:28:10.435 true 00:28:10.435 15:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:10.435 15:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.435 15:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.694 15:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:28:10.694 15:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:28:10.953 true 00:28:10.953 15:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:10.953 15:37:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.212 15:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:11.470 15:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:28:11.470 15:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:28:11.729 true 00:28:11.729 15:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:11.729 15:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.988 15:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:11.988 15:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:28:11.988 15:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:28:12.246 true 00:28:12.246 15:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 
00:28:12.246 15:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.505 15:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:12.764 15:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:28:12.764 15:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:28:13.022 true 00:28:13.022 15:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:13.022 15:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:13.281 15:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:13.281 15:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:28:13.281 15:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:28:13.539 true 00:28:13.539 15:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 2332248 00:28:13.539 15:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:13.799 15:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:14.057 15:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:28:14.057 15:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:28:14.315 true 00:28:14.315 15:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:14.315 15:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:14.574 15:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:14.574 15:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:28:14.574 15:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:28:14.833 true 00:28:14.833 15:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:14.833 15:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:15.091 15:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:15.349 15:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:28:15.349 15:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:28:15.608 true 00:28:15.608 15:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:15.608 15:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:15.867 15:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:15.867 15:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:28:15.867 15:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:28:16.127 true 00:28:16.127 15:37:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:16.127 15:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.385 15:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:16.645 15:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:28:16.645 15:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:28:16.903 true 00:28:16.903 15:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:16.903 15:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.162 15:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:17.162 15:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:28:17.162 15:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:28:17.421 true 
00:28:17.421 15:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:17.421 15:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.680 15:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:17.939 Initializing NVMe Controllers 00:28:17.939 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:17.939 Controller IO queue size 128, less than required. 00:28:17.939 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:17.939 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:17.939 Initialization complete. Launching workers. 
00:28:17.939 ========================================================
00:28:17.939 Latency(us)
00:28:17.939 Device Information : IOPS MiB/s Average min max
00:28:17.939 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27266.40 13.31 4694.33 1593.56 9206.74
00:28:17.939 ========================================================
00:28:17.939 Total : 27266.40 13.31 4694.33 1593.56 9206.74
00:28:17.939
00:28:17.939 15:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:28:17.939 15:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:28:18.197 true 00:28:18.197 15:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2332248 00:28:18.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2332248) - No such process 00:28:18.197 15:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2332248 00:28:18.197 15:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.197 15:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:18.456 15:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:18.456 15:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:18.456 
15:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:18.456 15:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:18.456 15:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:18.715 null0 00:28:18.715 15:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:18.715 15:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:18.715 15:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:18.974 null1 00:28:18.974 15:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:18.974 15:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:18.974 15:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:18.974 null2 00:28:19.232 15:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:19.232 15:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:19.232 15:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:19.232 null3 00:28:19.232 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:19.232 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:19.232 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:19.491 null4 00:28:19.491 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:19.491 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:19.491 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:19.750 null5 00:28:19.750 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:19.750 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:19.750 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:20.009 null6 00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:20.009 15:37:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:20.009 null7 00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
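Editor's note: the trace above (ns_hotplug_stress.sh lines @59-@60) is a loop creating eight null bdevs, `null0` through `null7`, each 100 MiB with a 4096-byte block size, via `scripts/rpc.py bdev_null_create`. A minimal dry-run sketch of that loop follows; here `rpc` simply echoes the command it would run instead of invoking the real `rpc.py` (the actual script calls the SPDK JSON-RPC client directly):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the bdev-creation loop traced above (@59-@60).
# "rpc" echoes instead of calling the real scripts/rpc.py.
rpc() { echo "rpc.py $*"; }

nthreads=8
for (( i = 0; i < nthreads; ++i )); do
    # 100 MiB null bdev with a 4096-byte block size: null0 .. null7
    rpc bdev_null_create "null$i" 100 4096
done
```

The loop matches the eight `null0` ... `null7` confirmations interleaved in the log above.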
00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:20.009 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:20.010 15:37:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
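Editor's note: the interleaved `@14`/`@16`/`@17`/`@18` trace lines above come from the `add_remove` helper that each background thread runs: ten cycles of attaching a namespace to `nqn.2016-06.io.spdk:cnode1` and detaching it again. A dry-run sketch of that helper, reconstructed from the trace (again with `rpc` echoing in place of the real `scripts/rpc.py`):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the add_remove helper traced above (@14-@18):
# 10 add/remove cycles of one namespace against cnode1.
rpc() { echo "rpc.py $*"; }   # stand-in for the real scripts/rpc.py

add_remove() {
    local nsid=$1 bdev=$2 i
    for (( i = 0; i < 10; i++ )); do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

add_remove 1 null0
```

Because eight such workers run concurrently, the add and remove calls for nsids 1-8 appear in the log in an arbitrary interleaved order, which is exactly the hotplug stress being exercised.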
00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2337648 2337650 2337651 2337653 2337655 2337657 2337659 2337661 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.010 15:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:20.269 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.269 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:28:20.269 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:20.269 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:20.269 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:20.269 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:20.269 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:20.269 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.528 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:20.786 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.786 15:37:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:20.787 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:20.787 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:20.787 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:20.787 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:20.787 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:20.787 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.046 15:37:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.046 15:37:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:21.046 15:37:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:21.046 15:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:21.305 15:37:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.305 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.305 15:37:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:21.563 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:21.563 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:21.563 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:21.563 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:21.563 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:21.564 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:21.564 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:21.564 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:21.822 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:22.081 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:22.082 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:22.340 15:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:22.340 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:22.340 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:22.340 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:22.340 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:22.340 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:22.340 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:22.340 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:22.340 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:22.599 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:22.857 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:22.857 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:22.857 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:22.857 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:22.857 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:22.857 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:22.857 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:22.857 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:23.117 15:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:23.117 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:23.117 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:23.117 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:23.117 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:23.117 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:23.117 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.376 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:23.634 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:23.634 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:23.634 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:23.634 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:23.634 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:23.634 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:23.634 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:23.634 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:23.893 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.893 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.893 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:23.893 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.893 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.893 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:23.893 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.893 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.893 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:23.893 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.893 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.893 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.893 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.893 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:23.893 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.893 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:23.893 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.893 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:23.894 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.894 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.894 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:23.894 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:23.894 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:23.894 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:24.153 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:24.153 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:24.153 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:24.153 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:24.153 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:24.153 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:24.153 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:24.153 15:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:24.153 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:24.153 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:24.153 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:24.153 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:24.153 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:24.153 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:24.153 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:24.412 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:24.412 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:24.412 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:24.412 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:24.412 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:24.412 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:24.412 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:24.412 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:24.412 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:24.412 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:28:24.412 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:28:24.412 15:37:28
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:24.413 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:24.413 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:24.413 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:24.413 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:24.413 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:24.413 rmmod nvme_tcp 00:28:24.413 rmmod nvme_fabrics 00:28:24.413 rmmod nvme_keyring 00:28:24.413 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:24.413 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:24.413 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:24.413 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2331760 ']' 00:28:24.413 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2331760 00:28:24.413 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2331760 ']' 00:28:24.413 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2331760 00:28:24.413 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:24.413 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:24.413 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2331760 00:28:24.413 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:24.413 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:24.413 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2331760' 00:28:24.413 killing process with pid 2331760 00:28:24.413 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2331760 00:28:24.413 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2331760 00:28:24.672 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:24.672 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:24.672 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:24.672 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:24.672 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:24.672 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:24.672 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:24.672 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 
-- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:24.672 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:24.672 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.672 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.672 15:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.575 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:26.575 00:28:26.575 real 0m47.917s 00:28:26.575 user 3m2.398s 00:28:26.575 sys 0m21.841s 00:28:26.575 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:26.575 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:26.575 ************************************ 00:28:26.575 END TEST nvmf_ns_hotplug_stress 00:28:26.575 ************************************ 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:26.835 ************************************ 00:28:26.835 START TEST nvmf_delete_subsystem 00:28:26.835 
************************************ 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:26.835 * Looking for test storage... 00:28:26.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 
-- # local 'op=<' 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:26.835 15:37:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:26.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.835 --rc genhtml_branch_coverage=1 00:28:26.835 --rc genhtml_function_coverage=1 00:28:26.835 --rc genhtml_legend=1 00:28:26.835 --rc geninfo_all_blocks=1 00:28:26.835 --rc geninfo_unexecuted_blocks=1 00:28:26.835 00:28:26.835 ' 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:26.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.835 --rc genhtml_branch_coverage=1 00:28:26.835 --rc genhtml_function_coverage=1 00:28:26.835 --rc genhtml_legend=1 00:28:26.835 --rc geninfo_all_blocks=1 00:28:26.835 --rc geninfo_unexecuted_blocks=1 00:28:26.835 00:28:26.835 ' 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:26.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.835 --rc genhtml_branch_coverage=1 00:28:26.835 --rc 
genhtml_function_coverage=1 00:28:26.835 --rc genhtml_legend=1 00:28:26.835 --rc geninfo_all_blocks=1 00:28:26.835 --rc geninfo_unexecuted_blocks=1 00:28:26.835 00:28:26.835 ' 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:26.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.835 --rc genhtml_branch_coverage=1 00:28:26.835 --rc genhtml_function_coverage=1 00:28:26.835 --rc genhtml_legend=1 00:28:26.835 --rc geninfo_all_blocks=1 00:28:26.835 --rc geninfo_unexecuted_blocks=1 00:28:26.835 00:28:26.835 ' 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:26.835 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:26.836 15:37:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:26.836 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.095 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:27.095 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:27.095 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:27.095 15:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- 
# e810=() 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.674 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:33.675 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:33.675 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:33.675 Found net devices under 0000:86:00.0: cvl_0_0 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.675 15:37:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:33.675 Found net devices under 0000:86:00.1: cvl_0_1 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.675 15:37:36 
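Annotation: the net-device discovery above (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` followed by `pci_net_devs=("${pci_net_devs[@]##*/}")`) globs the sysfs `net/` directory for each PCI function and then strips the directory prefix to keep only the interface names (`cvl_0_0`, `cvl_0_1`). The same idiom demonstrated against a throwaway directory instead of the real `/sys`:

```shell
#!/usr/bin/env bash
# Sketch of the sysfs glob + prefix-strip idiom from the trace above,
# using a temporary directory that mimics the sysfs layout.
tmp=$(mktemp -d)
mkdir -p "$tmp/0000:86:00.0/net/cvl_0_0"
pci_net_devs=("$tmp/0000:86:00.0/net/"*)   # full paths from the glob
pci_net_devs=("${pci_net_devs[@]##*/}")    # keep only the leaf names
printf '%s\n' "${pci_net_devs[@]}"
rm -rf "$tmp"
```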
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:33.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:28:33.675 00:28:33.675 --- 10.0.0.2 ping statistics --- 00:28:33.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.675 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:33.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:28:33.675 00:28:33.675 --- 10.0.0.1 ping statistics --- 00:28:33.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.675 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
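Annotation: `nvmf_tcp_init` above splits the two E810 ports across a network namespace boundary: `cvl_0_0` (target side, 10.0.0.2) is moved into `cvl_0_0_ns_spdk`, `cvl_0_1` (initiator side, 10.0.0.1) stays in the root namespace, an iptables ACCEPT rule opens TCP 4420, and a ping in each direction verifies the path. A dry-run sketch of the essential plumbing; the `run` wrapper is an illustrative stand-in (real execution needs root), and some steps from the trace (addr flush, lo up) are omitted:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup seen in the trace above. We only
# assemble the command list here; swap run() { "$@"; } on a real host.
NS=cvl_0_0_ns_spdk
cmds=()
run() { cmds+=("$*"); }
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                            # target NIC
run ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
printf '%s\n' "${cmds[@]}"
```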
-- common/autotest_common.sh@10 -- # set +x 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2342019 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2342019 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2342019 ']' 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.675 15:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.675 [2024-11-20 15:37:36.745155] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:33.675 [2024-11-20 15:37:36.746124] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:28:33.675 [2024-11-20 15:37:36.746178] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.675 [2024-11-20 15:37:36.826222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:33.675 [2024-11-20 15:37:36.865494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.675 [2024-11-20 15:37:36.865530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.675 [2024-11-20 15:37:36.865538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.675 [2024-11-20 15:37:36.865544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.675 [2024-11-20 15:37:36.865551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:33.675 [2024-11-20 15:37:36.866777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.675 [2024-11-20 15:37:36.866778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.675 [2024-11-20 15:37:36.934295] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:33.675 [2024-11-20 15:37:36.934894] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:33.675 [2024-11-20 15:37:36.935123] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
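Annotation: `nvmfappstart` above launches `nvmf_tgt` inside the target namespace by prepending `NVMF_TARGET_NS_CMD` to `NVMF_APP` (the `@266`/`@293` array assignments), which is why the process at `@508` runs under `ip netns exec cvl_0_0_ns_spdk`. A sketch of that array composition, with an illustrative relative binary path standing in for the workspace path in the trace:

```shell
#!/usr/bin/env bash
# Sketch of the NVMF_APP composition from nvmf/common.sh lines @266/@293
# above; the binary path here is illustrative.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3)
# Prepend the netns wrapper so the target starts inside the namespace.
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
echo "${NVMF_APP[*]}"
```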
00:28:33.939 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.939 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:28:33.939 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:33.939 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:33.939 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.939 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.939 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:33.939 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.939 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.939 [2024-11-20 15:37:37.627574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.939 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.939 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:33.939 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.939 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:28:33.939 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.939 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:33.939 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.939 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.939 [2024-11-20 15:37:37.651839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.939 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.939 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:33.939 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.940 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.940 NULL1 00:28:33.940 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.940 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:33.940 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.940 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:28:33.940 Delay0 00:28:33.940 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.940 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:33.940 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.940 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.940 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.940 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2342180 00:28:33.940 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:33.940 15:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:33.940 [2024-11-20 15:37:37.763659] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
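Annotation: by this point delete_subsystem.sh has driven the full RPC setup (transport, subsystem, listener, a null bdev wrapped in a delay bdev, namespace attach) and started `spdk_nvme_perf` against 10.0.0.2:4420 for 5 seconds. A dry-run sketch of that RPC sequence; `rpc` is a stand-in that would normally be `scripts/rpc.py -s /var/tmp/spdk.sock`, and the comments restate the argument meanings as assumed from SPDK's RPC conventions:

```shell
#!/usr/bin/env bash
# Dry-run stand-in: print each RPC instead of invoking scripts/rpc.py.
rpc() { echo "rpc.py $*"; }
rpc nvmf_create_transport -t tcp -o -u 8192        # -u: in-capsule data size
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512                # size_mb, block_size
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```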
00:28:35.842 15:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:35.842 15:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.842 15:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 starting I/O failed: -6 00:28:36.102 Write completed with error (sct=0, sc=8) 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 Write completed with error (sct=0, sc=8) 00:28:36.102 Write completed with error (sct=0, sc=8) 00:28:36.102 starting I/O failed: -6 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 Write completed with error (sct=0, sc=8) 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 starting I/O failed: -6 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 Write completed with error (sct=0, sc=8) 00:28:36.102 Write completed with error (sct=0, sc=8) 00:28:36.102 starting I/O failed: -6 00:28:36.102 Write completed with error (sct=0, sc=8) 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 starting I/O failed: -6 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 starting I/O failed: -6 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 Write completed with error (sct=0, sc=8) 00:28:36.102 Write completed with error (sct=0, 
sc=8) 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 starting I/O failed: -6 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 Write completed with error (sct=0, sc=8) 00:28:36.102 Write completed with error (sct=0, sc=8) 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 starting I/O failed: -6 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 Read completed with error (sct=0, sc=8) 00:28:36.102 Write completed with error (sct=0, sc=8) 00:28:36.103 starting I/O failed: -6 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 [2024-11-20 15:37:39.879263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32860 is same with the state(6) to be set 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 
00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 [2024-11-20 15:37:39.880161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c324a0 is same with the state(6) to be set 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Write 
completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 [2024-11-20 15:37:39.880352] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c322c0 is same with the state(6) to be set 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 starting I/O failed: -6 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 starting I/O failed: -6 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 starting I/O failed: -6 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 starting I/O failed: -6 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 starting I/O failed: -6 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 starting I/O failed: -6 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 starting I/O failed: -6 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 
Read completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 starting I/O failed: -6 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Write completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 starting I/O failed: -6 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 starting I/O failed: -6 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 Read completed with error (sct=0, sc=8) 00:28:36.103 starting I/O failed: -6 00:28:36.103 starting I/O failed: -6 00:28:36.103 starting I/O failed: -6 00:28:36.103 starting I/O failed: -6 00:28:36.103 starting I/O failed: -6 00:28:36.103 starting I/O failed: -6 00:28:36.103 starting I/O failed: -6 00:28:36.103 starting I/O failed: -6 00:28:37.040 [2024-11-20 15:37:40.857908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c339a0 is same with the state(6) to be set 00:28:37.040 Read completed with error (sct=0, sc=8) 00:28:37.040 Read completed with error (sct=0, sc=8) 00:28:37.040 Read completed with error (sct=0, sc=8) 00:28:37.040 Read completed with error (sct=0, sc=8) 00:28:37.040 Read completed with error (sct=0, sc=8) 00:28:37.040 Write completed with error (sct=0, sc=8) 00:28:37.040 Read completed with error (sct=0, sc=8) 00:28:37.040 Write completed with error (sct=0, sc=8) 00:28:37.040 Read completed with error (sct=0, sc=8) 00:28:37.040 Read completed with error (sct=0, sc=8) 00:28:37.040 Read completed with error (sct=0, sc=8) 00:28:37.040 Write completed with error (sct=0, sc=8) 00:28:37.040 Write completed with error (sct=0, sc=8) 00:28:37.040 Read completed with error (sct=0, sc=8) 00:28:37.040 Read completed with error (sct=0, 
sc=8) 00:28:37.040 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 [2024-11-20 15:37:40.882879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1ee800d020 is same with the state(6) to be set 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 
00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 [2024-11-20 15:37:40.882980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32680 is same with the state(6) to be set 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Write 
completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 [2024-11-20 15:37:40.883174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1ee8000c40 is same with the state(6) to be set 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Write completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with 
error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 Read completed with error (sct=0, sc=8) 00:28:37.041 [2024-11-20 15:37:40.884224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1ee800d800 is same with the state(6) to be set 00:28:37.041 Initializing NVMe Controllers 00:28:37.041 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:37.041 Controller IO queue size 128, less than required. 00:28:37.041 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:37.041 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:37.041 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:37.041 Initialization complete. Launching workers. 
00:28:37.041 ======================================================== 00:28:37.041 Latency(us) 00:28:37.041 Device Information : IOPS MiB/s Average min max 00:28:37.041 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 149.70 0.07 898015.18 899.54 1008253.62 00:28:37.041 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 173.57 0.08 1052362.88 317.10 2000656.67 00:28:37.041 ======================================================== 00:28:37.041 Total : 323.27 0.16 980888.02 317.10 2000656.67 00:28:37.041 00:28:37.041 [2024-11-20 15:37:40.884877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c339a0 (9): Bad file descriptor 00:28:37.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:28:37.041 15:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.041 15:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:28:37.041 15:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2342180 00:28:37.041 15:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2342180 00:28:37.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2342180) - No such process 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2342180 00:28:37.609 15:37:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2342180 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2342180 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:37.609 [2024-11-20 15:37:41.419742] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2342735 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2342735 00:28:37.609 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:37.609 [2024-11-20 15:37:41.503515] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:28:38.177 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:38.177 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2342735 00:28:38.177 15:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:38.743 15:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:38.743 15:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2342735 00:28:38.743 15:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:39.310 15:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:39.310 15:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2342735 00:28:39.310 15:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:39.568 15:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- 
# (( delay++ > 20 )) 00:28:39.568 15:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2342735 00:28:39.568 15:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:40.135 15:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:40.135 15:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2342735 00:28:40.135 15:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:40.703 15:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:40.703 15:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2342735 00:28:40.703 15:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:40.963 Initializing NVMe Controllers 00:28:40.963 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:40.963 Controller IO queue size 128, less than required. 00:28:40.963 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:40.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:40.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:40.963 Initialization complete. Launching workers. 
00:28:40.963 ======================================================== 00:28:40.963 Latency(us) 00:28:40.963 Device Information : IOPS MiB/s Average min max 00:28:40.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001957.93 1000177.17 1005818.07 00:28:40.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005782.28 1000279.86 1042092.97 00:28:40.963 ======================================================== 00:28:40.963 Total : 256.00 0.12 1003870.10 1000177.17 1042092.97 00:28:40.963 00:28:41.222 15:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:41.222 15:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2342735 00:28:41.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2342735) - No such process 00:28:41.222 15:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2342735 00:28:41.222 15:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:41.222 15:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:28:41.222 15:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:41.222 15:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:28:41.222 15:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:41.222 15:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:41.222 15:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:28:41.222 15:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:41.222 rmmod nvme_tcp 00:28:41.222 rmmod nvme_fabrics 00:28:41.222 rmmod nvme_keyring 00:28:41.222 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:41.222 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:41.222 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:41.222 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2342019 ']' 00:28:41.222 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2342019 00:28:41.222 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2342019 ']' 00:28:41.222 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2342019 00:28:41.222 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:41.222 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:41.222 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2342019 00:28:41.222 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:41.222 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:41.222 15:37:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2342019' 00:28:41.222 killing process with pid 2342019 00:28:41.222 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2342019 00:28:41.222 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2342019 00:28:41.481 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:41.481 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:41.481 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:41.481 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:41.481 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:41.481 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:41.481 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:41.481 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:41.481 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:41.481 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.481 15:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.481 15:37:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:44.018 00:28:44.018 real 0m16.807s 00:28:44.018 user 0m26.390s 00:28:44.018 sys 0m6.075s 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:44.018 ************************************ 00:28:44.018 END TEST nvmf_delete_subsystem 00:28:44.018 ************************************ 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:44.018 ************************************ 00:28:44.018 START TEST nvmf_host_management 00:28:44.018 ************************************ 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:44.018 * Looking for test storage... 
00:28:44.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:44.018 15:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:44.018 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:44.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.019 --rc genhtml_branch_coverage=1 00:28:44.019 --rc genhtml_function_coverage=1 00:28:44.019 --rc genhtml_legend=1 00:28:44.019 --rc geninfo_all_blocks=1 00:28:44.019 --rc geninfo_unexecuted_blocks=1 00:28:44.019 00:28:44.019 ' 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:44.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.019 --rc genhtml_branch_coverage=1 00:28:44.019 --rc genhtml_function_coverage=1 00:28:44.019 --rc genhtml_legend=1 00:28:44.019 --rc geninfo_all_blocks=1 00:28:44.019 --rc geninfo_unexecuted_blocks=1 00:28:44.019 00:28:44.019 ' 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:44.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.019 --rc genhtml_branch_coverage=1 00:28:44.019 --rc genhtml_function_coverage=1 00:28:44.019 --rc genhtml_legend=1 00:28:44.019 --rc geninfo_all_blocks=1 00:28:44.019 --rc geninfo_unexecuted_blocks=1 00:28:44.019 00:28:44.019 ' 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:44.019 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.019 --rc genhtml_branch_coverage=1 00:28:44.019 --rc genhtml_function_coverage=1 00:28:44.019 --rc genhtml_legend=1 00:28:44.019 --rc geninfo_all_blocks=1 00:28:44.019 --rc geninfo_unexecuted_blocks=1 00:28:44.019 00:28:44.019 ' 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.019 15:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.019 
15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:28:44.019 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:44.020 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:44.020 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:44.020 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:44.020 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:44.020 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:44.020 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.020 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:44.020 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:44.020 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:44.020 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.020 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:44.020 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.020 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:44.020 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:44.020 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:44.020 15:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:50.591 
15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:50.591 15:37:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:50.591 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.591 15:37:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:50.591 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.591 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.592 15:37:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:50.592 Found net devices under 0000:86:00.0: cvl_0_0 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:50.592 Found net devices under 0000:86:00.1: cvl_0_1 00:28:50.592 15:37:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
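The device-matching sequence traced above (vendor 0x8086, device 0x159b, driver `ice`, bucketed into the e810 list) can be sketched as a standalone classifier. This is a sketch, not the harness's code: the ID tables are copied from the `nvmf/common.sh` values visible in the trace (common.sh@325-343), and the function name is hypothetical.

```python
# Sketch of the PCI NIC classification performed by nvmf/common.sh:
# vendor:device pairs are bucketed into e810 / x722 / mlx groups.
INTEL, MELLANOX = 0x8086, 0x15B3

# ID tables as they appear in the trace (hypothetical variable names)
E810 = {0x1592, 0x159B}
X722 = {0x37D2}
MLX = {0xA2DC, 0x1021, 0xA2D6, 0x101D, 0x101B, 0x1017, 0x1019, 0x1015, 0x1013}

def classify(vendor: int, device: int):
    """Return the NIC family bucket for a vendor:device pair, or None."""
    if vendor == INTEL and device in E810:
        return "e810"
    if vendor == INTEL and device in X722:
        return "x722"
    if vendor == MELLANOX and device in MLX:
        return "mlx"
    return None

# Both ports found in the log (0000:86:00.0 / 0000:86:00.1) are 0x8086:0x159b
print(classify(0x8086, 0x159B))  # e810
```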
00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:50.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:50.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:28:50.592 00:28:50.592 --- 10.0.0.2 ping statistics --- 00:28:50.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.592 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:50.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:50.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:28:50.592 00:28:50.592 --- 10.0.0.1 ping statistics --- 00:28:50.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.592 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
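The connectivity check above prints the standard `ping` summary for each direction of the netns pair. A small parser for that summary line (a hypothetical helper, not part of the harness) can be sketched as:

```python
import re

def parse_ping_summary(line: str) -> dict:
    """Parse 'N packets transmitted, M received, X% packet loss, ...'."""
    m = re.search(
        r"(\d+) packets transmitted, (\d+) received, (\d+)% packet loss", line
    )
    if not m:
        raise ValueError("not a ping summary line")
    tx, rx, loss = map(int, m.groups())
    return {"transmitted": tx, "received": rx, "loss_pct": loss}

# Summary line exactly as printed in the trace for 10.0.0.2
summary = "1 packets transmitted, 1 received, 0% packet loss, time 0ms"
print(parse_ping_summary(summary))
```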
00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2346807 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2346807 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2346807 ']' 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.592 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.592 [2024-11-20 15:37:53.561748] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:50.592 [2024-11-20 15:37:53.562704] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:28:50.592 [2024-11-20 15:37:53.562740] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.592 [2024-11-20 15:37:53.644066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:50.592 [2024-11-20 15:37:53.685930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.592 [2024-11-20 15:37:53.685978] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.592 [2024-11-20 15:37:53.685986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.592 [2024-11-20 15:37:53.685992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:50.593 [2024-11-20 15:37:53.685997] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:50.593 [2024-11-20 15:37:53.687655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:50.593 [2024-11-20 15:37:53.687786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:50.593 [2024-11-20 15:37:53.687897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.593 [2024-11-20 15:37:53.687898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:50.593 [2024-11-20 15:37:53.755837] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:50.593 [2024-11-20 15:37:53.756354] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:50.593 [2024-11-20 15:37:53.756685] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:50.593 [2024-11-20 15:37:53.757002] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:50.593 [2024-11-20 15:37:53.757044] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
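`nvmfappstart` is invoked with `-m 0x1E`, and the trace then shows reactors starting on cores 1 through 4. Decoding a hex core mask into a core list confirms the mapping; this is a sketch under the usual SPDK convention that bit *i* of the mask selects core *i*, and the helper name is my own:

```python
def cores_from_mask(mask: int) -> list:
    """Return the CPU core indices whose bit is set in the core mask."""
    cores = []
    bit = 0
    while mask:
        if mask & 1:
            cores.append(bit)
        mask >>= 1
        bit += 1
    return cores

# 0x1E = 0b11110 -> cores 1,2,3,4, matching the four reactor_run notices
print(cores_from_mask(0x1E))  # [1, 2, 3, 4]
```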
00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.593 [2024-11-20 15:37:53.832593] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.593 15:37:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.593 Malloc0 00:28:50.593 [2024-11-20 15:37:53.920882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2346984 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2346984 /var/tmp/bdevperf.sock 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2346984 ']' 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:50.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.593 { 00:28:50.593 "params": { 00:28:50.593 "name": "Nvme$subsystem", 00:28:50.593 "trtype": "$TEST_TRANSPORT", 00:28:50.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.593 "adrfam": "ipv4", 00:28:50.593 "trsvcid": "$NVMF_PORT", 00:28:50.593 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.593 "hdgst": ${hdgst:-false}, 00:28:50.593 "ddgst": ${ddgst:-false} 00:28:50.593 }, 00:28:50.593 "method": "bdev_nvme_attach_controller" 00:28:50.593 } 00:28:50.593 EOF 00:28:50.593 )") 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:50.593 15:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:50.593 "params": { 00:28:50.593 "name": "Nvme0", 00:28:50.593 "trtype": "tcp", 00:28:50.593 "traddr": "10.0.0.2", 00:28:50.593 "adrfam": "ipv4", 00:28:50.593 "trsvcid": "4420", 00:28:50.593 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:50.593 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:50.593 "hdgst": false, 00:28:50.593 "ddgst": false 00:28:50.593 }, 00:28:50.593 "method": "bdev_nvme_attach_controller" 00:28:50.593 }' 00:28:50.593 [2024-11-20 15:37:54.017192] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:28:50.593 [2024-11-20 15:37:54.017239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2346984 ] 00:28:50.593 [2024-11-20 15:37:54.091834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.593 [2024-11-20 15:37:54.132924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.593 Running I/O for 10 seconds... 
00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:50.593 15:37:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=107 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 107 -ge 100 ']' 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:50.593 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.594 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.594 
[2024-11-20 15:37:54.432325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaec0 is same with the state(6) to be set 00:28:50.594 [2024-11-20 15:37:54.432361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaec0 is same with the state(6) to be set 00:28:50.594 [2024-11-20 15:37:54.432370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaec0 is same with the state(6) to be set 00:28:50.594 [2024-11-20 15:37:54.432376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaec0 is same with the state(6) to be set 00:28:50.594 [2024-11-20 15:37:54.432382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaec0 is same with the state(6) to be set 00:28:50.594 [2024-11-20 15:37:54.432388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaec0 is same with the state(6) to be set 00:28:50.594 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.594 [2024-11-20 15:37:54.437197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:50.594 [2024-11-20 15:37:54.437271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:50.594 [2024-11-20 15:37:54.437549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437685] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.594 [2024-11-20 15:37:54.437744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.594 [2024-11-20 15:37:54.437750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.437758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.437765] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.437773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.437780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.437788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.437797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.437810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.437817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.437825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.437831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.437840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.437846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.437854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.595 [2024-11-20 15:37:54.437861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.437871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.437878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.437886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.437892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.437900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.437907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.437914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.437921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.437930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.437937] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.437945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.437958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.437966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.437972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.437980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.437986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.437996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.438002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.438010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.438017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.438025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.438031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.438039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.438046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.438054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.438060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.438068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.438075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.438083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.438091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.438100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.438106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.438114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.438121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.438128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.438135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.438143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.438149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:50.595 [2024-11-20 15:37:54.438157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.438167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.438176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.438183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.438191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.595 [2024-11-20 15:37:54.438198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.438294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:50.595 [2024-11-20 15:37:54.438305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.438312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:50.595 [2024-11-20 15:37:54.438319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.438326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:50.595 [2024-11-20 15:37:54.438332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.438339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:50.595 [2024-11-20 15:37:54.438345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.595 [2024-11-20 15:37:54.438352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd68500 is same with the state(6) to be set 00:28:50.595 [2024-11-20 15:37:54.439239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 
00:28:50.595 task offset: 24576 on job bdev=Nvme0n1 fails
00:28:50.595
00:28:50.595 Latency(us)
00:28:50.595 [2024-11-20T14:37:54.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:50.595 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:50.595 Job: Nvme0n1 ended in about 0.11 seconds with error
00:28:50.595 Verification LBA range: start 0x0 length 0x400
00:28:50.595 Nvme0n1 : 0.11 1748.86 109.30 582.95 0.00 25259.56 1709.63 27354.16
00:28:50.595 [2024-11-20T14:37:54.503Z] ===================================================================================================================
00:28:50.595 [2024-11-20T14:37:54.503Z] Total : 1748.86 109.30 582.95 0.00 25259.56 1709.63 27354.16
00:28:50.595 [2024-11-20 15:37:54.441622] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:50.596 [2024-11-20 15:37:54.441643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd68500 (9): Bad file descriptor
00:28:50.596 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:50.596 15:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:28:50.855 [2024-11-20 15:37:54.535171] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:28:51.791 15:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2346984 00:28:51.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2346984) - No such process 00:28:51.791 15:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:51.791 15:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:51.791 15:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:51.791 15:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:51.791 15:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:51.791 15:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:51.791 15:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.791 15:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.791 { 00:28:51.791 "params": { 00:28:51.791 "name": "Nvme$subsystem", 00:28:51.791 "trtype": "$TEST_TRANSPORT", 00:28:51.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.791 "adrfam": "ipv4", 00:28:51.791 "trsvcid": "$NVMF_PORT", 00:28:51.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.791 "hdgst": ${hdgst:-false}, 00:28:51.791 "ddgst": ${ddgst:-false} 
00:28:51.791 }, 00:28:51.791 "method": "bdev_nvme_attach_controller" 00:28:51.791 } 00:28:51.791 EOF 00:28:51.791 )") 00:28:51.791 15:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:51.791 15:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:51.791 15:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:51.791 15:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:51.791 "params": { 00:28:51.791 "name": "Nvme0", 00:28:51.791 "trtype": "tcp", 00:28:51.791 "traddr": "10.0.0.2", 00:28:51.791 "adrfam": "ipv4", 00:28:51.791 "trsvcid": "4420", 00:28:51.791 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:51.791 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:51.791 "hdgst": false, 00:28:51.791 "ddgst": false 00:28:51.791 }, 00:28:51.791 "method": "bdev_nvme_attach_controller" 00:28:51.791 }' 00:28:51.791 [2024-11-20 15:37:55.500556] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:28:51.791 [2024-11-20 15:37:55.500604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2347225 ] 00:28:51.791 [2024-11-20 15:37:55.576467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.791 [2024-11-20 15:37:55.616202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.050 Running I/O for 1 seconds... 
00:28:52.986 1948.00 IOPS, 121.75 MiB/s
00:28:52.986 Latency(us)
00:28:52.986 [2024-11-20T14:37:56.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:52.986 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:52.986 Verification LBA range: start 0x0 length 0x400
00:28:52.986 Nvme0n1 : 1.01 1991.96 124.50 0.00 0.00 31516.82 1773.75 28151.99
00:28:52.986 [2024-11-20T14:37:56.894Z] ===================================================================================================================
00:28:52.986 [2024-11-20T14:37:56.894Z] Total : 1991.96 124.50 0.00 0.00 31516.82 1773.75 28151.99
00:28:53.246 15:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:28:53.246 15:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:28:53.246 15:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:53.246 15:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:53.246 15:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:28:53.246 15:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:53.246 15:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:28:53.246 15:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:53.246 15:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:28:53.246
15:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:53.246 15:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:53.246 rmmod nvme_tcp 00:28:53.246 rmmod nvme_fabrics 00:28:53.246 rmmod nvme_keyring 00:28:53.246 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:53.246 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:53.246 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:53.246 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2346807 ']' 00:28:53.246 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2346807 00:28:53.246 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2346807 ']' 00:28:53.246 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2346807 00:28:53.246 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:28:53.246 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:53.246 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2346807 00:28:53.246 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:53.246 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:53.246 15:37:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2346807' 00:28:53.246 killing process with pid 2346807 00:28:53.246 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2346807 00:28:53.246 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2346807 00:28:53.506 [2024-11-20 15:37:57.233173] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:53.506 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:53.506 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:53.506 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:53.506 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:53.506 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:28:53.506 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:28:53.506 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:53.507 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:53.507 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:53.507 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.507 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.507 15:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:56.043 00:28:56.043 real 0m11.924s 00:28:56.043 user 0m16.058s 00:28:56.043 sys 0m6.182s 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:56.043 ************************************ 00:28:56.043 END TEST nvmf_host_management 00:28:56.043 ************************************ 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:56.043 ************************************ 00:28:56.043 START TEST nvmf_lvol 00:28:56.043 ************************************ 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:56.043 * Looking for test storage... 
00:28:56.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:56.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.043 --rc genhtml_branch_coverage=1 00:28:56.043 --rc genhtml_function_coverage=1 00:28:56.043 --rc genhtml_legend=1 00:28:56.043 --rc geninfo_all_blocks=1 00:28:56.043 --rc geninfo_unexecuted_blocks=1 00:28:56.043 00:28:56.043 ' 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:56.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.043 --rc genhtml_branch_coverage=1 00:28:56.043 --rc genhtml_function_coverage=1 00:28:56.043 --rc genhtml_legend=1 00:28:56.043 --rc geninfo_all_blocks=1 00:28:56.043 --rc geninfo_unexecuted_blocks=1 00:28:56.043 00:28:56.043 ' 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:56.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.043 --rc genhtml_branch_coverage=1 00:28:56.043 --rc genhtml_function_coverage=1 00:28:56.043 --rc genhtml_legend=1 00:28:56.043 --rc geninfo_all_blocks=1 00:28:56.043 --rc geninfo_unexecuted_blocks=1 00:28:56.043 00:28:56.043 ' 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:56.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.043 --rc genhtml_branch_coverage=1 00:28:56.043 --rc genhtml_function_coverage=1 00:28:56.043 --rc genhtml_legend=1 00:28:56.043 --rc geninfo_all_blocks=1 00:28:56.043 --rc geninfo_unexecuted_blocks=1 00:28:56.043 00:28:56.043 ' 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:56.043 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:56.044 
15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:56.044 15:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:01.484 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:01.484 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:01.484 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:01.484 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:01.484 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:01.484 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:01.484 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:01.485 15:38:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:01.485 15:38:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:01.485 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:01.485 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:01.485 15:38:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:01.485 Found net devices under 0000:86:00.0: cvl_0_0 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.485 15:38:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:01.485 Found net devices under 0000:86:00.1: cvl_0_1 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
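The trace above shows nvmf/common.sh classifying NICs by PCI vendor:device ID and then picking the e810 ports for the TCP transport. The pattern can be condensed into a standalone sketch; the `pci_bus_cache` contents are stubbed here with the two E810 ports reported in this run (0000:86:00.0/1, device 0x159b), since the real script populates the cache by scanning the bus.

```shell
#!/usr/bin/env bash
# Condensed sketch of the device-classification pattern traced above.
# pci_bus_cache maps "vendor:device" -> space-separated PCI addresses;
# here it is stubbed with the two ice ports found in this log.
intel=0x8086
mellanox=0x15b3
declare -A pci_bus_cache=(
  ["$intel:0x159b"]="0000:86:00.0 0000:86:00.1"   # Intel E810 (ice driver)
)

e810=() x722=() mlx=()
# Missing keys expand to nothing, so absent hardware appends zero elements;
# the unquoted expansion word-splits multi-port entries, as in the original.
e810+=(${pci_bus_cache["$intel:0x1592"]})
e810+=(${pci_bus_cache["$intel:0x159b"]})
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})

# For TARGET_TRANSPORT=tcp with e810 preferred, only the e810 list is kept.
pci_devs=("${e810[@]}")
for pci in "${pci_devs[@]}"; do
  echo "Found $pci"
done
```

Run on a machine without the cache stub, the loop would print one line per discovered port, matching the "Found 0000:86:00.0 (0x8086 - 0x159b)" entries in the log.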
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:01.485 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:01.486 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:01.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:01.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:29:01.764 00:29:01.764 --- 10.0.0.2 ping statistics --- 00:29:01.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.764 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:01.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:01.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:29:01.764 00:29:01.764 --- 10.0.0.1 ping statistics --- 00:29:01.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.764 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2350994 
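The nvmf_tcp_init sequence traced above moves the target-side port into a private network namespace so the target (10.0.0.2) and initiator (10.0.0.1) can exchange real TCP traffic on a single host. The commands can be summarized as a dry-run sketch; `run` only prints each command here (drop the `echo` to execute for real, which requires root and the cvl_0_0/cvl_0_1 interfaces from this log).

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns bring-up from nvmf/common.sh, as traced above.
# "run" prints instead of executing, so this is safe to run anywhere.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                   # target port into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1            # target -> initiator
```

The two pings correspond to the round-trip checks in the log; the nvmf_tgt process is then launched inside the namespace via `ip netns exec cvl_0_0_ns_spdk`.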
00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2350994 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2350994 ']' 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:01.764 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:01.764 [2024-11-20 15:38:05.561339] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:01.764 [2024-11-20 15:38:05.562229] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:29:01.765 [2024-11-20 15:38:05.562262] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.765 [2024-11-20 15:38:05.626173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:01.765 [2024-11-20 15:38:05.668996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:01.765 [2024-11-20 15:38:05.669030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:01.765 [2024-11-20 15:38:05.669037] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:01.765 [2024-11-20 15:38:05.669044] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:01.765 [2024-11-20 15:38:05.669049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:02.023 [2024-11-20 15:38:05.673965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.023 [2024-11-20 15:38:05.674004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.023 [2024-11-20 15:38:05.674005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:02.023 [2024-11-20 15:38:05.740714] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:02.023 [2024-11-20 15:38:05.740732] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:02.023 [2024-11-20 15:38:05.741280] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:02.023 [2024-11-20 15:38:05.741490] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:02.023 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:02.023 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:02.023 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:02.023 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:02.023 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:02.023 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:02.023 15:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:02.282 [2024-11-20 15:38:05.986682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:02.282 15:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:02.541 15:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:02.541 15:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:02.801 15:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:02.801 15:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:02.801 15:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:03.061 15:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5844180e-8568-44f2-8a5f-ce4dd82c94e9 00:29:03.061 15:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5844180e-8568-44f2-8a5f-ce4dd82c94e9 lvol 20 00:29:03.319 15:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f89001b3-416f-48a6-8e82-588a9859b9c4 00:29:03.319 15:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:03.578 15:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f89001b3-416f-48a6-8e82-588a9859b9c4 00:29:03.578 15:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:03.837 [2024-11-20 15:38:07.650604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:03.837 15:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:04.096 
15:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2351275 00:29:04.096 15:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:04.096 15:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:05.030 15:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f89001b3-416f-48a6-8e82-588a9859b9c4 MY_SNAPSHOT 00:29:05.288 15:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=09308421-bcbc-4e5f-a877-32d006538cf5 00:29:05.288 15:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f89001b3-416f-48a6-8e82-588a9859b9c4 30 00:29:05.546 15:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 09308421-bcbc-4e5f-a877-32d006538cf5 MY_CLONE 00:29:05.804 15:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=796a1342-443f-4d93-b52a-59a6fc86c6a6 00:29:05.804 15:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 796a1342-443f-4d93-b52a-59a6fc86c6a6 00:29:06.369 15:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2351275 00:29:14.479 Initializing NVMe Controllers 00:29:14.479 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:14.479 
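The nvmf_lvol.sh steps traced above build a raid0-backed lvstore, export a logical volume over NVMe/TCP, and exercise snapshot/resize/clone/inflate while spdk_nvme_perf runs. A dry-run sketch of the RPC sequence follows; `rpc` is a stand-in for scripts/rpc.py that only prints, and the UUIDs are the ones reported in this particular run.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the lvol workflow traced above; rpc() only prints,
# so no live SPDK target is needed. UUIDs are copied from this log.
rpc() { echo "+ rpc.py $*"; }

LVS=5844180e-8568-44f2-8a5f-ce4dd82c94e9     # lvstore UUID from this run
LVOL=f89001b3-416f-48a6-8e82-588a9859b9c4    # lvol UUID from this run
SNAP=09308421-bcbc-4e5f-a877-32d006538cf5    # snapshot UUID from this run
CLONE=796a1342-443f-4d93-b52a-59a6fc86c6a6   # clone UUID from this run

rpc bdev_malloc_create 64 512                        # Malloc0
rpc bdev_malloc_create 64 512                        # Malloc1
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"
rpc bdev_lvol_create_lvstore raid0 lvs               # returns $LVS
rpc bdev_lvol_create -u "$LVS" lvol 20               # 20 MiB volume, $LVOL
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# ... perf I/O runs against the listener while the volume is mutated:
rpc bdev_lvol_snapshot "$LVOL" MY_SNAPSHOT           # returns $SNAP
rpc bdev_lvol_resize "$LVOL" 30                      # grow live volume to 30 MiB
rpc bdev_lvol_clone "$SNAP" MY_CLONE                 # returns $CLONE
rpc bdev_lvol_inflate "$CLONE"                       # decouple clone from snapshot
```

Running snapshot/resize/clone/inflate concurrently with the randwrite workload is the point of the test: the lvol layer must stay consistent while I/O is in flight.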
Controller IO queue size 128, less than required. 00:29:14.479 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:14.479 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:14.479 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:14.479 Initialization complete. Launching workers. 00:29:14.479 ======================================================== 00:29:14.479 Latency(us) 00:29:14.479 Device Information : IOPS MiB/s Average min max 00:29:14.479 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12411.40 48.48 10313.36 562.43 79411.49 00:29:14.479 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12252.00 47.86 10447.70 4814.41 58761.40 00:29:14.479 ======================================================== 00:29:14.479 Total : 24663.40 96.34 10380.09 562.43 79411.49 00:29:14.479 00:29:14.479 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:14.479 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f89001b3-416f-48a6-8e82-588a9859b9c4 00:29:14.737 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5844180e-8568-44f2-8a5f-ce4dd82c94e9 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:14.996 rmmod nvme_tcp 00:29:14.996 rmmod nvme_fabrics 00:29:14.996 rmmod nvme_keyring 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2350994 ']' 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2350994 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2350994 ']' 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2350994 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2350994 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2350994' 00:29:14.996 killing process with pid 2350994 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2350994 00:29:14.996 15:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2350994 00:29:15.255 15:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:15.255 15:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:15.255 15:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:15.255 15:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:15.255 15:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:15.255 15:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:15.255 15:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:15.255 15:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:15.255 15:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:15.255 15:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.255 15:38:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.255 15:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:17.790 00:29:17.790 real 0m21.752s 00:29:17.790 user 0m55.475s 00:29:17.790 sys 0m9.661s 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:17.790 ************************************ 00:29:17.790 END TEST nvmf_lvol 00:29:17.790 ************************************ 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:17.790 ************************************ 00:29:17.790 START TEST nvmf_lvs_grow 00:29:17.790 ************************************ 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:17.790 * Looking for test storage... 
00:29:17.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:17.790 15:38:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:17.790 15:38:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:17.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.790 --rc genhtml_branch_coverage=1 00:29:17.790 --rc genhtml_function_coverage=1 00:29:17.790 --rc genhtml_legend=1 00:29:17.790 --rc geninfo_all_blocks=1 00:29:17.790 --rc geninfo_unexecuted_blocks=1 00:29:17.790 00:29:17.790 ' 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:17.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.790 --rc genhtml_branch_coverage=1 00:29:17.790 --rc genhtml_function_coverage=1 00:29:17.790 --rc genhtml_legend=1 00:29:17.790 --rc geninfo_all_blocks=1 00:29:17.790 --rc geninfo_unexecuted_blocks=1 00:29:17.790 00:29:17.790 ' 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:17.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.790 --rc genhtml_branch_coverage=1 00:29:17.790 --rc genhtml_function_coverage=1 00:29:17.790 --rc genhtml_legend=1 00:29:17.790 --rc geninfo_all_blocks=1 00:29:17.790 --rc geninfo_unexecuted_blocks=1 00:29:17.790 00:29:17.790 ' 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:17.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.790 --rc genhtml_branch_coverage=1 00:29:17.790 --rc genhtml_function_coverage=1 00:29:17.790 --rc genhtml_legend=1 00:29:17.790 --rc geninfo_all_blocks=1 00:29:17.790 --rc 
geninfo_unexecuted_blocks=1 00:29:17.790 00:29:17.790 ' 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:17.790 15:38:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:17.790 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.791 15:38:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:17.791 15:38:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:17.791 15:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:24.356 
15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.356 15:38:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:24.356 15:38:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:24.356 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:24.356 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:24.357 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:24.357 Found net devices under 0000:86:00.0: cvl_0_0 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.357 15:38:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:24.357 Found net devices under 0000:86:00.1: cvl_0_1 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:24.357 
15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:24.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:29:24.357 00:29:24.357 --- 10.0.0.2 ping statistics --- 00:29:24.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.357 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:24.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:29:24.357 00:29:24.357 --- 10.0.0.1 ping statistics --- 00:29:24.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.357 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:24.357 15:38:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2356615 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2356615 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2356615 ']' 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.357 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:24.357 [2024-11-20 15:38:27.422588] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:24.357 [2024-11-20 15:38:27.423540] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:29:24.357 [2024-11-20 15:38:27.423577] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.357 [2024-11-20 15:38:27.504842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.357 [2024-11-20 15:38:27.546244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.358 [2024-11-20 15:38:27.546283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.358 [2024-11-20 15:38:27.546290] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.358 [2024-11-20 15:38:27.546297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.358 [2024-11-20 15:38:27.546302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:24.358 [2024-11-20 15:38:27.546866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.358 [2024-11-20 15:38:27.615461] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:24.358 [2024-11-20 15:38:27.615688] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:24.358 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:24.358 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:24.358 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:24.358 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:24.358 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:24.358 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.358 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:24.358 [2024-11-20 15:38:27.855497] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.358 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:24.358 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:24.358 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:24.358 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:24.358 ************************************ 00:29:24.358 START TEST lvs_grow_clean 00:29:24.358 ************************************ 00:29:24.358 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:29:24.358 15:38:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:24.358 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:24.358 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:24.358 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:24.358 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:24.358 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:24.358 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:24.358 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:24.358 15:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:24.358 15:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:24.358 15:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:24.617 15:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=899ac121-4d36-4d19-8bbc-9773312f618c 00:29:24.617 15:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 899ac121-4d36-4d19-8bbc-9773312f618c 00:29:24.617 15:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:24.876 15:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:24.876 15:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:24.876 15:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 899ac121-4d36-4d19-8bbc-9773312f618c lvol 150 00:29:24.876 15:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=edf65ca4-c08d-41b3-99d8-8944c1a0da39 00:29:24.876 15:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:24.876 15:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:25.135 [2024-11-20 15:38:28.931246] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:25.135 [2024-11-20 15:38:28.931369] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:25.135 true 00:29:25.135 15:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 899ac121-4d36-4d19-8bbc-9773312f618c 00:29:25.135 15:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:25.394 15:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:25.394 15:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:25.653 15:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 edf65ca4-c08d-41b3-99d8-8944c1a0da39 00:29:25.653 15:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:25.912 [2024-11-20 15:38:29.703755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.912 15:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:26.171 15:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2357089 00:29:26.171 15:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:26.171 15:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:26.171 15:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2357089 /var/tmp/bdevperf.sock 00:29:26.171 15:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2357089 ']' 00:29:26.171 15:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:26.171 15:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:26.171 15:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:26.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:26.171 15:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:26.171 15:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:26.171 [2024-11-20 15:38:29.951531] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:29:26.171 [2024-11-20 15:38:29.951581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2357089 ] 00:29:26.171 [2024-11-20 15:38:30.026877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.171 [2024-11-20 15:38:30.075183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.430 15:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:26.430 15:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:26.430 15:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:26.689 Nvme0n1 00:29:26.689 15:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:26.948 [ 00:29:26.948 { 00:29:26.948 "name": "Nvme0n1", 00:29:26.948 "aliases": [ 00:29:26.948 "edf65ca4-c08d-41b3-99d8-8944c1a0da39" 00:29:26.948 ], 00:29:26.948 "product_name": "NVMe disk", 00:29:26.948 
"block_size": 4096, 00:29:26.948 "num_blocks": 38912, 00:29:26.948 "uuid": "edf65ca4-c08d-41b3-99d8-8944c1a0da39", 00:29:26.948 "numa_id": 1, 00:29:26.948 "assigned_rate_limits": { 00:29:26.948 "rw_ios_per_sec": 0, 00:29:26.948 "rw_mbytes_per_sec": 0, 00:29:26.948 "r_mbytes_per_sec": 0, 00:29:26.948 "w_mbytes_per_sec": 0 00:29:26.948 }, 00:29:26.948 "claimed": false, 00:29:26.948 "zoned": false, 00:29:26.948 "supported_io_types": { 00:29:26.948 "read": true, 00:29:26.948 "write": true, 00:29:26.948 "unmap": true, 00:29:26.948 "flush": true, 00:29:26.948 "reset": true, 00:29:26.948 "nvme_admin": true, 00:29:26.948 "nvme_io": true, 00:29:26.948 "nvme_io_md": false, 00:29:26.948 "write_zeroes": true, 00:29:26.948 "zcopy": false, 00:29:26.948 "get_zone_info": false, 00:29:26.948 "zone_management": false, 00:29:26.948 "zone_append": false, 00:29:26.948 "compare": true, 00:29:26.948 "compare_and_write": true, 00:29:26.948 "abort": true, 00:29:26.948 "seek_hole": false, 00:29:26.948 "seek_data": false, 00:29:26.948 "copy": true, 00:29:26.948 "nvme_iov_md": false 00:29:26.948 }, 00:29:26.948 "memory_domains": [ 00:29:26.948 { 00:29:26.948 "dma_device_id": "system", 00:29:26.948 "dma_device_type": 1 00:29:26.948 } 00:29:26.948 ], 00:29:26.948 "driver_specific": { 00:29:26.948 "nvme": [ 00:29:26.948 { 00:29:26.948 "trid": { 00:29:26.948 "trtype": "TCP", 00:29:26.948 "adrfam": "IPv4", 00:29:26.948 "traddr": "10.0.0.2", 00:29:26.948 "trsvcid": "4420", 00:29:26.948 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:26.948 }, 00:29:26.948 "ctrlr_data": { 00:29:26.948 "cntlid": 1, 00:29:26.948 "vendor_id": "0x8086", 00:29:26.948 "model_number": "SPDK bdev Controller", 00:29:26.948 "serial_number": "SPDK0", 00:29:26.948 "firmware_revision": "25.01", 00:29:26.948 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:26.948 "oacs": { 00:29:26.948 "security": 0, 00:29:26.948 "format": 0, 00:29:26.948 "firmware": 0, 00:29:26.948 "ns_manage": 0 00:29:26.948 }, 00:29:26.949 "multi_ctrlr": true, 
00:29:26.949 "ana_reporting": false 00:29:26.949 }, 00:29:26.949 "vs": { 00:29:26.949 "nvme_version": "1.3" 00:29:26.949 }, 00:29:26.949 "ns_data": { 00:29:26.949 "id": 1, 00:29:26.949 "can_share": true 00:29:26.949 } 00:29:26.949 } 00:29:26.949 ], 00:29:26.949 "mp_policy": "active_passive" 00:29:26.949 } 00:29:26.949 } 00:29:26.949 ] 00:29:26.949 15:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2357127 00:29:26.949 15:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:26.949 15:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:26.949 Running I/O for 10 seconds... 00:29:27.886 Latency(us) 00:29:27.886 [2024-11-20T14:38:31.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:27.886 Nvme0n1 : 1.00 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:29:27.886 [2024-11-20T14:38:31.794Z] =================================================================================================================== 00:29:27.886 [2024-11-20T14:38:31.794Z] Total : 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:29:27.886 00:29:28.823 15:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 899ac121-4d36-4d19-8bbc-9773312f618c 00:29:29.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:29.081 Nvme0n1 : 2.00 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:29.081 [2024-11-20T14:38:32.989Z] 
=================================================================================================================== 00:29:29.081 [2024-11-20T14:38:32.989Z] Total : 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:29.081 00:29:29.081 true 00:29:29.081 15:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 899ac121-4d36-4d19-8bbc-9773312f618c 00:29:29.081 15:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:29.339 15:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:29.339 15:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:29.339 15:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2357127 00:29:29.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:29.904 Nvme0n1 : 3.00 22563.67 88.14 0.00 0.00 0.00 0.00 0.00 00:29:29.904 [2024-11-20T14:38:33.812Z] =================================================================================================================== 00:29:29.904 [2024-11-20T14:38:33.812Z] Total : 22563.67 88.14 0.00 0.00 0.00 0.00 0.00 00:29:29.904 00:29:31.279 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:31.279 Nvme0n1 : 4.00 22669.50 88.55 0.00 0.00 0.00 0.00 0.00 00:29:31.279 [2024-11-20T14:38:35.187Z] =================================================================================================================== 00:29:31.279 [2024-11-20T14:38:35.187Z] Total : 22669.50 88.55 0.00 0.00 0.00 0.00 0.00 00:29:31.279 00:29:31.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:29:31.845 Nvme0n1 : 5.00 22736.40 88.81 0.00 0.00 0.00 0.00 0.00 00:29:31.845 [2024-11-20T14:38:35.753Z] =================================================================================================================== 00:29:31.845 [2024-11-20T14:38:35.753Z] Total : 22736.40 88.81 0.00 0.00 0.00 0.00 0.00 00:29:31.845 00:29:33.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:33.220 Nvme0n1 : 6.00 22799.33 89.06 0.00 0.00 0.00 0.00 0.00 00:29:33.220 [2024-11-20T14:38:37.128Z] =================================================================================================================== 00:29:33.220 [2024-11-20T14:38:37.128Z] Total : 22799.33 89.06 0.00 0.00 0.00 0.00 0.00 00:29:33.220 00:29:34.157 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.157 Nvme0n1 : 7.00 22835.29 89.20 0.00 0.00 0.00 0.00 0.00 00:29:34.157 [2024-11-20T14:38:38.065Z] =================================================================================================================== 00:29:34.157 [2024-11-20T14:38:38.065Z] Total : 22835.29 89.20 0.00 0.00 0.00 0.00 0.00 00:29:34.157 00:29:35.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:35.093 Nvme0n1 : 8.00 22868.25 89.33 0.00 0.00 0.00 0.00 0.00 00:29:35.093 [2024-11-20T14:38:39.001Z] =================================================================================================================== 00:29:35.093 [2024-11-20T14:38:39.001Z] Total : 22868.25 89.33 0.00 0.00 0.00 0.00 0.00 00:29:35.093 00:29:36.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:36.030 Nvme0n1 : 9.00 22881.44 89.38 0.00 0.00 0.00 0.00 0.00 00:29:36.030 [2024-11-20T14:38:39.938Z] =================================================================================================================== 00:29:36.030 [2024-11-20T14:38:39.938Z] Total : 22881.44 89.38 0.00 0.00 0.00 0.00 0.00 00:29:36.030 
00:29:36.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:36.965 Nvme0n1 : 10.00 22892.00 89.42 0.00 0.00 0.00 0.00 0.00 00:29:36.965 [2024-11-20T14:38:40.873Z] =================================================================================================================== 00:29:36.965 [2024-11-20T14:38:40.873Z] Total : 22892.00 89.42 0.00 0.00 0.00 0.00 0.00 00:29:36.965 00:29:36.965 00:29:36.965 Latency(us) 00:29:36.965 [2024-11-20T14:38:40.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:36.965 Nvme0n1 : 10.00 22896.51 89.44 0.00 0.00 5587.45 3177.07 25530.55 00:29:36.965 [2024-11-20T14:38:40.873Z] =================================================================================================================== 00:29:36.965 [2024-11-20T14:38:40.873Z] Total : 22896.51 89.44 0.00 0.00 5587.45 3177.07 25530.55 00:29:36.965 { 00:29:36.965 "results": [ 00:29:36.965 { 00:29:36.965 "job": "Nvme0n1", 00:29:36.965 "core_mask": "0x2", 00:29:36.965 "workload": "randwrite", 00:29:36.965 "status": "finished", 00:29:36.965 "queue_depth": 128, 00:29:36.965 "io_size": 4096, 00:29:36.965 "runtime": 10.003622, 00:29:36.965 "iops": 22896.506885206178, 00:29:36.965 "mibps": 89.43948002033663, 00:29:36.965 "io_failed": 0, 00:29:36.965 "io_timeout": 0, 00:29:36.965 "avg_latency_us": 5587.453660474433, 00:29:36.965 "min_latency_us": 3177.0713043478263, 00:29:36.965 "max_latency_us": 25530.54608695652 00:29:36.965 } 00:29:36.965 ], 00:29:36.965 "core_count": 1 00:29:36.965 } 00:29:36.965 15:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2357089 00:29:36.965 15:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2357089 ']' 00:29:36.965 15:38:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2357089 00:29:36.965 15:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:36.965 15:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:36.965 15:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2357089 00:29:36.965 15:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:36.965 15:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:36.965 15:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2357089' 00:29:36.965 killing process with pid 2357089 00:29:36.965 15:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2357089 00:29:36.965 Received shutdown signal, test time was about 10.000000 seconds 00:29:36.965 00:29:36.965 Latency(us) 00:29:36.965 [2024-11-20T14:38:40.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.965 [2024-11-20T14:38:40.873Z] =================================================================================================================== 00:29:36.965 [2024-11-20T14:38:40.873Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:36.965 15:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2357089 00:29:37.223 15:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:37.481 15:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:37.740 15:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 899ac121-4d36-4d19-8bbc-9773312f618c 00:29:37.740 15:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:37.740 15:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:37.740 15:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:37.740 15:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:37.998 [2024-11-20 15:38:41.779336] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:37.998 15:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 899ac121-4d36-4d19-8bbc-9773312f618c 00:29:37.998 15:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:37.998 15:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 899ac121-4d36-4d19-8bbc-9773312f618c 00:29:37.998 15:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:37.998 15:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:37.998 15:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:37.998 15:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:37.998 15:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:37.998 15:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:37.998 15:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:37.998 15:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:37.998 15:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 899ac121-4d36-4d19-8bbc-9773312f618c 00:29:38.257 request: 00:29:38.257 { 00:29:38.257 "uuid": "899ac121-4d36-4d19-8bbc-9773312f618c", 00:29:38.257 "method": 
"bdev_lvol_get_lvstores", 00:29:38.257 "req_id": 1 00:29:38.257 } 00:29:38.257 Got JSON-RPC error response 00:29:38.257 response: 00:29:38.257 { 00:29:38.257 "code": -19, 00:29:38.257 "message": "No such device" 00:29:38.257 } 00:29:38.257 15:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:38.258 15:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:38.258 15:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:38.258 15:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:38.258 15:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:38.517 aio_bdev 00:29:38.517 15:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev edf65ca4-c08d-41b3-99d8-8944c1a0da39 00:29:38.517 15:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=edf65ca4-c08d-41b3-99d8-8944c1a0da39 00:29:38.517 15:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:38.517 15:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:38.517 15:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:38.517 15:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:38.517 15:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:38.775 15:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b edf65ca4-c08d-41b3-99d8-8944c1a0da39 -t 2000 00:29:38.775 [ 00:29:38.775 { 00:29:38.775 "name": "edf65ca4-c08d-41b3-99d8-8944c1a0da39", 00:29:38.775 "aliases": [ 00:29:38.775 "lvs/lvol" 00:29:38.775 ], 00:29:38.775 "product_name": "Logical Volume", 00:29:38.775 "block_size": 4096, 00:29:38.775 "num_blocks": 38912, 00:29:38.775 "uuid": "edf65ca4-c08d-41b3-99d8-8944c1a0da39", 00:29:38.775 "assigned_rate_limits": { 00:29:38.775 "rw_ios_per_sec": 0, 00:29:38.775 "rw_mbytes_per_sec": 0, 00:29:38.775 "r_mbytes_per_sec": 0, 00:29:38.775 "w_mbytes_per_sec": 0 00:29:38.775 }, 00:29:38.775 "claimed": false, 00:29:38.775 "zoned": false, 00:29:38.775 "supported_io_types": { 00:29:38.775 "read": true, 00:29:38.775 "write": true, 00:29:38.775 "unmap": true, 00:29:38.775 "flush": false, 00:29:38.775 "reset": true, 00:29:38.775 "nvme_admin": false, 00:29:38.775 "nvme_io": false, 00:29:38.775 "nvme_io_md": false, 00:29:38.775 "write_zeroes": true, 00:29:38.775 "zcopy": false, 00:29:38.775 "get_zone_info": false, 00:29:38.775 "zone_management": false, 00:29:38.775 "zone_append": false, 00:29:38.775 "compare": false, 00:29:38.775 "compare_and_write": false, 00:29:38.775 "abort": false, 00:29:38.775 "seek_hole": true, 00:29:38.775 "seek_data": true, 00:29:38.775 "copy": false, 00:29:38.775 "nvme_iov_md": false 00:29:38.775 }, 00:29:38.775 "driver_specific": { 00:29:38.775 "lvol": { 00:29:38.775 "lvol_store_uuid": "899ac121-4d36-4d19-8bbc-9773312f618c", 00:29:38.775 "base_bdev": "aio_bdev", 00:29:38.775 
"thin_provision": false, 00:29:38.775 "num_allocated_clusters": 38, 00:29:38.775 "snapshot": false, 00:29:38.775 "clone": false, 00:29:38.775 "esnap_clone": false 00:29:38.775 } 00:29:38.775 } 00:29:38.775 } 00:29:38.775 ] 00:29:38.775 15:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:38.775 15:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 899ac121-4d36-4d19-8bbc-9773312f618c 00:29:38.775 15:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:39.034 15:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:39.034 15:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 899ac121-4d36-4d19-8bbc-9773312f618c 00:29:39.034 15:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:39.292 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:39.292 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete edf65ca4-c08d-41b3-99d8-8944c1a0da39 00:29:39.551 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 899ac121-4d36-4d19-8bbc-9773312f618c 
00:29:39.551 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:39.810 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:39.810 00:29:39.810 real 0m15.710s 00:29:39.810 user 0m15.238s 00:29:39.810 sys 0m1.473s 00:29:39.810 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:39.810 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:39.810 ************************************ 00:29:39.810 END TEST lvs_grow_clean 00:29:39.810 ************************************ 00:29:39.810 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:39.810 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:39.810 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:39.810 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:39.810 ************************************ 00:29:39.810 START TEST lvs_grow_dirty 00:29:39.810 ************************************ 00:29:39.810 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:39.810 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:39.810 15:38:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:39.810 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:39.810 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:39.810 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:39.810 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:39.810 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:39.810 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:39.810 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:40.069 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:40.069 15:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:40.327 15:38:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d02e77e5-523a-44a2-9f75-7ba3c16ed0b5 00:29:40.327 15:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d02e77e5-523a-44a2-9f75-7ba3c16ed0b5 00:29:40.327 15:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:40.586 15:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:40.586 15:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:40.586 15:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d02e77e5-523a-44a2-9f75-7ba3c16ed0b5 lvol 150 00:29:40.586 15:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=db63c10c-295c-43b9-8e5d-cc619b41501d 00:29:40.586 15:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:40.844 15:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:40.844 [2024-11-20 15:38:44.651264] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:40.844 [2024-11-20 
15:38:44.651393] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:40.844 true 00:29:40.844 15:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d02e77e5-523a-44a2-9f75-7ba3c16ed0b5 00:29:40.844 15:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:41.124 15:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:41.124 15:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:41.443 15:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 db63c10c-295c-43b9-8e5d-cc619b41501d 00:29:41.443 15:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:41.712 [2024-11-20 15:38:45.391689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:41.712 15:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:41.712 15:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2359599 00:29:41.712 15:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:41.712 15:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:41.712 15:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2359599 /var/tmp/bdevperf.sock 00:29:41.712 15:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2359599 ']' 00:29:41.712 15:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:41.713 15:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:41.713 15:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:41.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:41.713 15:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:41.713 15:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:41.972 [2024-11-20 15:38:45.661486] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:29:41.972 [2024-11-20 15:38:45.661536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359599 ] 00:29:41.972 [2024-11-20 15:38:45.737457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.972 [2024-11-20 15:38:45.780313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.972 15:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:41.972 15:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:41.972 15:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:42.541 Nvme0n1 00:29:42.541 15:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:42.800 [ 00:29:42.800 { 00:29:42.800 "name": "Nvme0n1", 00:29:42.800 "aliases": [ 00:29:42.800 "db63c10c-295c-43b9-8e5d-cc619b41501d" 00:29:42.800 ], 00:29:42.800 "product_name": "NVMe disk", 00:29:42.800 "block_size": 4096, 00:29:42.800 "num_blocks": 38912, 00:29:42.800 "uuid": "db63c10c-295c-43b9-8e5d-cc619b41501d", 00:29:42.800 "numa_id": 1, 00:29:42.800 "assigned_rate_limits": { 00:29:42.800 "rw_ios_per_sec": 0, 00:29:42.800 "rw_mbytes_per_sec": 0, 00:29:42.800 "r_mbytes_per_sec": 0, 00:29:42.800 "w_mbytes_per_sec": 0 00:29:42.800 }, 00:29:42.800 "claimed": false, 00:29:42.800 "zoned": false, 
00:29:42.800 "supported_io_types": { 00:29:42.800 "read": true, 00:29:42.800 "write": true, 00:29:42.800 "unmap": true, 00:29:42.800 "flush": true, 00:29:42.800 "reset": true, 00:29:42.800 "nvme_admin": true, 00:29:42.800 "nvme_io": true, 00:29:42.800 "nvme_io_md": false, 00:29:42.800 "write_zeroes": true, 00:29:42.800 "zcopy": false, 00:29:42.800 "get_zone_info": false, 00:29:42.800 "zone_management": false, 00:29:42.800 "zone_append": false, 00:29:42.800 "compare": true, 00:29:42.800 "compare_and_write": true, 00:29:42.800 "abort": true, 00:29:42.800 "seek_hole": false, 00:29:42.800 "seek_data": false, 00:29:42.800 "copy": true, 00:29:42.800 "nvme_iov_md": false 00:29:42.800 }, 00:29:42.800 "memory_domains": [ 00:29:42.800 { 00:29:42.800 "dma_device_id": "system", 00:29:42.800 "dma_device_type": 1 00:29:42.800 } 00:29:42.800 ], 00:29:42.800 "driver_specific": { 00:29:42.800 "nvme": [ 00:29:42.800 { 00:29:42.800 "trid": { 00:29:42.800 "trtype": "TCP", 00:29:42.801 "adrfam": "IPv4", 00:29:42.801 "traddr": "10.0.0.2", 00:29:42.801 "trsvcid": "4420", 00:29:42.801 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:42.801 }, 00:29:42.801 "ctrlr_data": { 00:29:42.801 "cntlid": 1, 00:29:42.801 "vendor_id": "0x8086", 00:29:42.801 "model_number": "SPDK bdev Controller", 00:29:42.801 "serial_number": "SPDK0", 00:29:42.801 "firmware_revision": "25.01", 00:29:42.801 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:42.801 "oacs": { 00:29:42.801 "security": 0, 00:29:42.801 "format": 0, 00:29:42.801 "firmware": 0, 00:29:42.801 "ns_manage": 0 00:29:42.801 }, 00:29:42.801 "multi_ctrlr": true, 00:29:42.801 "ana_reporting": false 00:29:42.801 }, 00:29:42.801 "vs": { 00:29:42.801 "nvme_version": "1.3" 00:29:42.801 }, 00:29:42.801 "ns_data": { 00:29:42.801 "id": 1, 00:29:42.801 "can_share": true 00:29:42.801 } 00:29:42.801 } 00:29:42.801 ], 00:29:42.801 "mp_policy": "active_passive" 00:29:42.801 } 00:29:42.801 } 00:29:42.801 ] 00:29:42.801 15:38:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2359715 00:29:42.801 15:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:42.801 15:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:42.801 Running I/O for 10 seconds... 00:29:43.737 Latency(us) 00:29:43.737 [2024-11-20T14:38:47.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:43.737 Nvme0n1 : 1.00 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:29:43.737 [2024-11-20T14:38:47.645Z] =================================================================================================================== 00:29:43.737 [2024-11-20T14:38:47.645Z] Total : 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:29:43.737 00:29:44.673 15:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d02e77e5-523a-44a2-9f75-7ba3c16ed0b5 00:29:44.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:44.932 Nvme0n1 : 2.00 22542.50 88.06 0.00 0.00 0.00 0.00 0.00 00:29:44.932 [2024-11-20T14:38:48.840Z] =================================================================================================================== 00:29:44.932 [2024-11-20T14:38:48.840Z] Total : 22542.50 88.06 0.00 0.00 0.00 0.00 0.00 00:29:44.932 00:29:44.932 true 00:29:44.932 15:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u d02e77e5-523a-44a2-9f75-7ba3c16ed0b5 00:29:44.932 15:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:45.190 15:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:45.190 15:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:45.190 15:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2359715 00:29:45.758 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:45.758 Nvme0n1 : 3.00 22648.33 88.47 0.00 0.00 0.00 0.00 0.00 00:29:45.758 [2024-11-20T14:38:49.666Z] =================================================================================================================== 00:29:45.758 [2024-11-20T14:38:49.666Z] Total : 22648.33 88.47 0.00 0.00 0.00 0.00 0.00 00:29:45.758 00:29:47.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:47.136 Nvme0n1 : 4.00 22694.00 88.65 0.00 0.00 0.00 0.00 0.00 00:29:47.136 [2024-11-20T14:38:51.044Z] =================================================================================================================== 00:29:47.136 [2024-11-20T14:38:51.044Z] Total : 22694.00 88.65 0.00 0.00 0.00 0.00 0.00 00:29:47.136 00:29:48.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:48.073 Nvme0n1 : 5.00 22775.00 88.96 0.00 0.00 0.00 0.00 0.00 00:29:48.073 [2024-11-20T14:38:51.981Z] =================================================================================================================== 00:29:48.073 [2024-11-20T14:38:51.981Z] Total : 22775.00 88.96 0.00 0.00 0.00 0.00 0.00 00:29:48.073 00:29:49.009 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:29:49.009 Nvme0n1 : 6.00 22831.50 89.19 0.00 0.00 0.00 0.00 0.00 00:29:49.009 [2024-11-20T14:38:52.917Z] =================================================================================================================== 00:29:49.009 [2024-11-20T14:38:52.917Z] Total : 22831.50 89.19 0.00 0.00 0.00 0.00 0.00 00:29:49.009 00:29:49.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:49.943 Nvme0n1 : 7.00 22856.14 89.28 0.00 0.00 0.00 0.00 0.00 00:29:49.943 [2024-11-20T14:38:53.851Z] =================================================================================================================== 00:29:49.943 [2024-11-20T14:38:53.851Z] Total : 22856.14 89.28 0.00 0.00 0.00 0.00 0.00 00:29:49.943 00:29:50.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:50.878 Nvme0n1 : 8.00 22888.38 89.41 0.00 0.00 0.00 0.00 0.00 00:29:50.878 [2024-11-20T14:38:54.786Z] =================================================================================================================== 00:29:50.878 [2024-11-20T14:38:54.786Z] Total : 22888.38 89.41 0.00 0.00 0.00 0.00 0.00 00:29:50.878 00:29:51.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:51.812 Nvme0n1 : 9.00 22913.44 89.51 0.00 0.00 0.00 0.00 0.00 00:29:51.812 [2024-11-20T14:38:55.720Z] =================================================================================================================== 00:29:51.812 [2024-11-20T14:38:55.720Z] Total : 22913.44 89.51 0.00 0.00 0.00 0.00 0.00 00:29:51.812 00:29:52.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:52.746 Nvme0n1 : 10.00 22933.50 89.58 0.00 0.00 0.00 0.00 0.00 00:29:52.746 [2024-11-20T14:38:56.654Z] =================================================================================================================== 00:29:52.746 [2024-11-20T14:38:56.654Z] Total : 22933.50 89.58 0.00 0.00 0.00 0.00 0.00 00:29:52.746 00:29:52.746 
00:29:52.746 Latency(us) 00:29:52.746 [2024-11-20T14:38:56.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:52.746 Nvme0n1 : 10.00 22937.43 89.60 0.00 0.00 5577.38 3162.82 25872.47 00:29:52.746 [2024-11-20T14:38:56.654Z] =================================================================================================================== 00:29:52.746 [2024-11-20T14:38:56.654Z] Total : 22937.43 89.60 0.00 0.00 5577.38 3162.82 25872.47 00:29:52.746 { 00:29:52.746 "results": [ 00:29:52.746 { 00:29:52.746 "job": "Nvme0n1", 00:29:52.746 "core_mask": "0x2", 00:29:52.746 "workload": "randwrite", 00:29:52.746 "status": "finished", 00:29:52.746 "queue_depth": 128, 00:29:52.746 "io_size": 4096, 00:29:52.746 "runtime": 10.003866, 00:29:52.746 "iops": 22937.432388638554, 00:29:52.746 "mibps": 89.59934526811935, 00:29:52.746 "io_failed": 0, 00:29:52.746 "io_timeout": 0, 00:29:52.746 "avg_latency_us": 5577.382660680921, 00:29:52.746 "min_latency_us": 3162.824347826087, 00:29:52.746 "max_latency_us": 25872.47304347826 00:29:52.746 } 00:29:52.746 ], 00:29:52.746 "core_count": 1 00:29:52.746 } 00:29:53.005 15:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2359599 00:29:53.005 15:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2359599 ']' 00:29:53.005 15:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2359599 00:29:53.005 15:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:29:53.005 15:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:53.005 15:38:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2359599 00:29:53.005 15:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:53.005 15:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:53.005 15:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2359599' 00:29:53.005 killing process with pid 2359599 00:29:53.005 15:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2359599 00:29:53.005 Received shutdown signal, test time was about 10.000000 seconds 00:29:53.005 00:29:53.005 Latency(us) 00:29:53.005 [2024-11-20T14:38:56.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.005 [2024-11-20T14:38:56.913Z] =================================================================================================================== 00:29:53.005 [2024-11-20T14:38:56.913Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:53.005 15:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2359599 00:29:53.005 15:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:53.264 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:53.523 15:38:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d02e77e5-523a-44a2-9f75-7ba3c16ed0b5 00:29:53.523 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:53.782 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:53.782 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:53.782 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2356615 00:29:53.782 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2356615 00:29:53.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2356615 Killed "${NVMF_APP[@]}" "$@" 00:29:53.782 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:53.782 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:53.782 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:53.782 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:53.782 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:53.782 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2361547 00:29:53.782 15:38:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2361547 00:29:53.782 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:53.782 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2361547 ']' 00:29:53.782 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.782 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.782 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.782 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.782 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:53.782 [2024-11-20 15:38:57.562534] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:53.782 [2024-11-20 15:38:57.563454] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:29:53.782 [2024-11-20 15:38:57.563489] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.782 [2024-11-20 15:38:57.643321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.782 [2024-11-20 15:38:57.684266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.782 [2024-11-20 15:38:57.684301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.782 [2024-11-20 15:38:57.684308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.782 [2024-11-20 15:38:57.684313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.782 [2024-11-20 15:38:57.684319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:53.782 [2024-11-20 15:38:57.684869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.041 [2024-11-20 15:38:57.751290] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:54.041 [2024-11-20 15:38:57.751527] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:54.041 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:54.041 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:54.041 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:54.041 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:54.041 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:54.041 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:54.041 15:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:54.299 [2024-11-20 15:38:57.994246] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:54.299 [2024-11-20 15:38:57.994440] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:54.299 [2024-11-20 15:38:57.994521] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:54.299 15:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:54.299 15:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev db63c10c-295c-43b9-8e5d-cc619b41501d 00:29:54.299 15:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=db63c10c-295c-43b9-8e5d-cc619b41501d 00:29:54.299 15:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:54.299 15:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:54.299 15:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:54.299 15:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:54.299 15:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:54.558 15:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b db63c10c-295c-43b9-8e5d-cc619b41501d -t 2000 00:29:54.558 [ 00:29:54.558 { 00:29:54.558 "name": "db63c10c-295c-43b9-8e5d-cc619b41501d", 00:29:54.558 "aliases": [ 00:29:54.558 "lvs/lvol" 00:29:54.558 ], 00:29:54.558 "product_name": "Logical Volume", 00:29:54.558 "block_size": 4096, 00:29:54.558 "num_blocks": 38912, 00:29:54.558 "uuid": "db63c10c-295c-43b9-8e5d-cc619b41501d", 00:29:54.558 "assigned_rate_limits": { 00:29:54.558 "rw_ios_per_sec": 0, 00:29:54.558 "rw_mbytes_per_sec": 0, 00:29:54.558 "r_mbytes_per_sec": 0, 00:29:54.558 "w_mbytes_per_sec": 0 00:29:54.558 }, 00:29:54.558 "claimed": false, 00:29:54.558 "zoned": false, 00:29:54.558 "supported_io_types": { 00:29:54.558 "read": true, 00:29:54.558 "write": true, 00:29:54.558 "unmap": true, 00:29:54.558 "flush": false, 00:29:54.558 "reset": true, 00:29:54.558 "nvme_admin": false, 00:29:54.558 "nvme_io": false, 00:29:54.558 "nvme_io_md": false, 00:29:54.558 "write_zeroes": true, 
00:29:54.558 "zcopy": false, 00:29:54.558 "get_zone_info": false, 00:29:54.558 "zone_management": false, 00:29:54.558 "zone_append": false, 00:29:54.558 "compare": false, 00:29:54.558 "compare_and_write": false, 00:29:54.558 "abort": false, 00:29:54.558 "seek_hole": true, 00:29:54.558 "seek_data": true, 00:29:54.558 "copy": false, 00:29:54.558 "nvme_iov_md": false 00:29:54.558 }, 00:29:54.558 "driver_specific": { 00:29:54.558 "lvol": { 00:29:54.558 "lvol_store_uuid": "d02e77e5-523a-44a2-9f75-7ba3c16ed0b5", 00:29:54.558 "base_bdev": "aio_bdev", 00:29:54.558 "thin_provision": false, 00:29:54.558 "num_allocated_clusters": 38, 00:29:54.558 "snapshot": false, 00:29:54.558 "clone": false, 00:29:54.558 "esnap_clone": false 00:29:54.558 } 00:29:54.558 } 00:29:54.558 } 00:29:54.558 ] 00:29:54.558 15:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:54.558 15:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d02e77e5-523a-44a2-9f75-7ba3c16ed0b5 00:29:54.558 15:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:54.817 15:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:54.817 15:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d02e77e5-523a-44a2-9f75-7ba3c16ed0b5 00:29:54.817 15:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:55.076 15:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:55.076 15:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:55.335 [2024-11-20 15:38:59.013348] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:55.335 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d02e77e5-523a-44a2-9f75-7ba3c16ed0b5 00:29:55.335 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:29:55.335 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d02e77e5-523a-44a2-9f75-7ba3c16ed0b5 00:29:55.335 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:55.335 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:55.335 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:55.335 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:55.335 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:55.335 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:55.335 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:55.335 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:55.335 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d02e77e5-523a-44a2-9f75-7ba3c16ed0b5 00:29:55.594 request: 00:29:55.594 { 00:29:55.594 "uuid": "d02e77e5-523a-44a2-9f75-7ba3c16ed0b5", 00:29:55.594 "method": "bdev_lvol_get_lvstores", 00:29:55.594 "req_id": 1 00:29:55.594 } 00:29:55.594 Got JSON-RPC error response 00:29:55.594 response: 00:29:55.594 { 00:29:55.594 "code": -19, 00:29:55.594 "message": "No such device" 00:29:55.594 } 00:29:55.594 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:29:55.594 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:55.594 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:55.594 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:55.594 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:55.594 aio_bdev 00:29:55.594 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev db63c10c-295c-43b9-8e5d-cc619b41501d 00:29:55.594 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=db63c10c-295c-43b9-8e5d-cc619b41501d 00:29:55.594 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:55.594 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:55.594 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:55.594 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:55.594 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:55.852 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b db63c10c-295c-43b9-8e5d-cc619b41501d -t 2000 00:29:56.112 [ 00:29:56.112 { 00:29:56.112 "name": "db63c10c-295c-43b9-8e5d-cc619b41501d", 00:29:56.112 "aliases": [ 00:29:56.112 "lvs/lvol" 00:29:56.112 ], 00:29:56.112 "product_name": "Logical Volume", 00:29:56.112 "block_size": 4096, 00:29:56.112 "num_blocks": 38912, 00:29:56.112 "uuid": "db63c10c-295c-43b9-8e5d-cc619b41501d", 00:29:56.112 "assigned_rate_limits": { 00:29:56.112 "rw_ios_per_sec": 0, 00:29:56.112 "rw_mbytes_per_sec": 0, 00:29:56.112 
"r_mbytes_per_sec": 0, 00:29:56.112 "w_mbytes_per_sec": 0 00:29:56.112 }, 00:29:56.112 "claimed": false, 00:29:56.112 "zoned": false, 00:29:56.112 "supported_io_types": { 00:29:56.112 "read": true, 00:29:56.112 "write": true, 00:29:56.112 "unmap": true, 00:29:56.112 "flush": false, 00:29:56.112 "reset": true, 00:29:56.112 "nvme_admin": false, 00:29:56.112 "nvme_io": false, 00:29:56.112 "nvme_io_md": false, 00:29:56.112 "write_zeroes": true, 00:29:56.112 "zcopy": false, 00:29:56.112 "get_zone_info": false, 00:29:56.112 "zone_management": false, 00:29:56.112 "zone_append": false, 00:29:56.112 "compare": false, 00:29:56.112 "compare_and_write": false, 00:29:56.112 "abort": false, 00:29:56.112 "seek_hole": true, 00:29:56.112 "seek_data": true, 00:29:56.112 "copy": false, 00:29:56.112 "nvme_iov_md": false 00:29:56.112 }, 00:29:56.112 "driver_specific": { 00:29:56.112 "lvol": { 00:29:56.112 "lvol_store_uuid": "d02e77e5-523a-44a2-9f75-7ba3c16ed0b5", 00:29:56.112 "base_bdev": "aio_bdev", 00:29:56.112 "thin_provision": false, 00:29:56.112 "num_allocated_clusters": 38, 00:29:56.112 "snapshot": false, 00:29:56.112 "clone": false, 00:29:56.112 "esnap_clone": false 00:29:56.112 } 00:29:56.112 } 00:29:56.112 } 00:29:56.112 ] 00:29:56.112 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:56.112 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d02e77e5-523a-44a2-9f75-7ba3c16ed0b5 00:29:56.112 15:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:56.371 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:56.371 15:39:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d02e77e5-523a-44a2-9f75-7ba3c16ed0b5 00:29:56.371 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:56.371 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:56.371 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete db63c10c-295c-43b9-8e5d-cc619b41501d 00:29:56.630 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d02e77e5-523a-44a2-9f75-7ba3c16ed0b5 00:29:56.889 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:57.148 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:57.148 00:29:57.148 real 0m17.185s 00:29:57.148 user 0m34.528s 00:29:57.148 sys 0m3.927s 00:29:57.148 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:57.148 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:57.148 ************************************ 00:29:57.148 END TEST lvs_grow_dirty 00:29:57.148 ************************************ 
00:29:57.148 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:57.148 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:29:57.148 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:29:57.148 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:29:57.148 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:57.148 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:29:57.148 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:29:57.148 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:29:57.148 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:57.148 nvmf_trace.0 00:29:57.148 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:29:57.148 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:57.148 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:57.148 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:29:57.148 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:57.148 15:39:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:57.148 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:57.148 15:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:57.148 rmmod nvme_tcp 00:29:57.148 rmmod nvme_fabrics 00:29:57.148 rmmod nvme_keyring 00:29:57.148 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:57.148 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:57.148 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:57.148 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2361547 ']' 00:29:57.148 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2361547 00:29:57.148 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2361547 ']' 00:29:57.148 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2361547 00:29:57.148 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:29:57.148 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:57.148 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2361547 00:29:57.408 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:57.408 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:57.408 
15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2361547' 00:29:57.408 killing process with pid 2361547 00:29:57.408 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2361547 00:29:57.408 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2361547 00:29:57.408 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:57.408 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:57.408 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:57.408 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:57.408 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:57.408 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:57.408 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:57.408 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:57.408 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:57.408 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.408 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.408 15:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.943 
15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:59.943 00:29:59.943 real 0m42.107s 00:29:59.943 user 0m52.310s 00:29:59.943 sys 0m10.285s 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:59.943 ************************************ 00:29:59.943 END TEST nvmf_lvs_grow 00:29:59.943 ************************************ 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:59.943 ************************************ 00:29:59.943 START TEST nvmf_bdev_io_wait 00:29:59.943 ************************************ 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:59.943 * Looking for test storage... 
00:29:59.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:59.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.943 --rc genhtml_branch_coverage=1 00:29:59.943 --rc genhtml_function_coverage=1 00:29:59.943 --rc genhtml_legend=1 00:29:59.943 --rc geninfo_all_blocks=1 00:29:59.943 --rc geninfo_unexecuted_blocks=1 00:29:59.943 00:29:59.943 ' 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:59.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.943 --rc genhtml_branch_coverage=1 00:29:59.943 --rc genhtml_function_coverage=1 00:29:59.943 --rc genhtml_legend=1 00:29:59.943 --rc geninfo_all_blocks=1 00:29:59.943 --rc geninfo_unexecuted_blocks=1 00:29:59.943 00:29:59.943 ' 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:59.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.943 --rc genhtml_branch_coverage=1 00:29:59.943 --rc genhtml_function_coverage=1 00:29:59.943 --rc genhtml_legend=1 00:29:59.943 --rc geninfo_all_blocks=1 00:29:59.943 --rc geninfo_unexecuted_blocks=1 00:29:59.943 00:29:59.943 ' 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:59.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.943 --rc genhtml_branch_coverage=1 00:29:59.943 --rc genhtml_function_coverage=1 
00:29:59.943 --rc genhtml_legend=1 00:29:59.943 --rc geninfo_all_blocks=1 00:29:59.943 --rc geninfo_unexecuted_blocks=1 00:29:59.943 00:29:59.943 ' 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.943 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:59.944 15:39:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.944 15:39:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:59.944 15:39:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:59.944 15:39:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:59.944 15:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.509 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:06.509 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:06.509 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:06.509 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:06.509 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:06.509 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:06.509 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:06.509 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:06.509 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:06.509 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:06.509 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:06.510 15:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:06.510 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:06.510 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:06.510 Found net devices under 0000:86:00.0: cvl_0_0 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:06.510 Found net devices under 0000:86:00.1: cvl_0_1 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:06.510 15:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:06.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:06.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:30:06.510 00:30:06.510 --- 10.0.0.2 ping statistics --- 00:30:06.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.510 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:30:06.510 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:06.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:06.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:30:06.510 00:30:06.510 --- 10.0.0.1 ping statistics --- 00:30:06.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.511 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:06.511 15:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2365997 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2365997 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2365997 ']' 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.511 [2024-11-20 15:39:09.630895] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:06.511 [2024-11-20 15:39:09.631930] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:30:06.511 [2024-11-20 15:39:09.631985] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.511 [2024-11-20 15:39:09.713045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:06.511 [2024-11-20 15:39:09.756227] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.511 [2024-11-20 15:39:09.756267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.511 [2024-11-20 15:39:09.756274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.511 [2024-11-20 15:39:09.756280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.511 [2024-11-20 15:39:09.756286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:06.511 [2024-11-20 15:39:09.757780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.511 [2024-11-20 15:39:09.757889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:06.511 [2024-11-20 15:39:09.757923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.511 [2024-11-20 15:39:09.757923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:06.511 [2024-11-20 15:39:09.758407] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.511 15:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.511 [2024-11-20 15:39:09.897631] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:06.511 [2024-11-20 15:39:09.897771] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:06.511 [2024-11-20 15:39:09.898179] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:06.511 [2024-11-20 15:39:09.898358] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.511 [2024-11-20 15:39:09.910826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.511 Malloc0 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.511 15:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.511 [2024-11-20 15:39:09.982906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2366119 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2366122 00:30:06.511 15:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.511 { 00:30:06.511 "params": { 00:30:06.511 "name": "Nvme$subsystem", 00:30:06.511 "trtype": "$TEST_TRANSPORT", 00:30:06.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.511 "adrfam": "ipv4", 00:30:06.511 "trsvcid": "$NVMF_PORT", 00:30:06.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.511 "hdgst": ${hdgst:-false}, 00:30:06.511 "ddgst": ${ddgst:-false} 00:30:06.511 }, 00:30:06.511 "method": "bdev_nvme_attach_controller" 00:30:06.511 } 00:30:06.511 EOF 00:30:06.511 )") 00:30:06.511 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2366125 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.512 15:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.512 { 00:30:06.512 "params": { 00:30:06.512 "name": "Nvme$subsystem", 00:30:06.512 "trtype": "$TEST_TRANSPORT", 00:30:06.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.512 "adrfam": "ipv4", 00:30:06.512 "trsvcid": "$NVMF_PORT", 00:30:06.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.512 "hdgst": ${hdgst:-false}, 00:30:06.512 "ddgst": ${ddgst:-false} 00:30:06.512 }, 00:30:06.512 "method": "bdev_nvme_attach_controller" 00:30:06.512 } 00:30:06.512 EOF 00:30:06.512 )") 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2366129 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.512 { 00:30:06.512 "params": { 00:30:06.512 "name": 
"Nvme$subsystem", 00:30:06.512 "trtype": "$TEST_TRANSPORT", 00:30:06.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.512 "adrfam": "ipv4", 00:30:06.512 "trsvcid": "$NVMF_PORT", 00:30:06.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.512 "hdgst": ${hdgst:-false}, 00:30:06.512 "ddgst": ${ddgst:-false} 00:30:06.512 }, 00:30:06.512 "method": "bdev_nvme_attach_controller" 00:30:06.512 } 00:30:06.512 EOF 00:30:06.512 )") 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.512 { 00:30:06.512 "params": { 00:30:06.512 "name": "Nvme$subsystem", 00:30:06.512 "trtype": "$TEST_TRANSPORT", 00:30:06.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.512 "adrfam": "ipv4", 00:30:06.512 "trsvcid": "$NVMF_PORT", 00:30:06.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.512 "hdgst": ${hdgst:-false}, 00:30:06.512 "ddgst": ${ddgst:-false} 00:30:06.512 }, 00:30:06.512 "method": 
"bdev_nvme_attach_controller" 00:30:06.512 } 00:30:06.512 EOF 00:30:06.512 )") 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2366119 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:06.512 15:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:06.512 15:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:06.512 15:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:06.512 "params": { 00:30:06.512 "name": "Nvme1", 00:30:06.512 "trtype": "tcp", 00:30:06.512 "traddr": "10.0.0.2", 00:30:06.512 "adrfam": "ipv4", 00:30:06.512 "trsvcid": "4420", 00:30:06.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.512 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:06.512 "hdgst": false, 00:30:06.512 "ddgst": false 00:30:06.512 }, 00:30:06.512 "method": "bdev_nvme_attach_controller" 00:30:06.512 }' 00:30:06.512 15:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:06.512 15:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:06.512 15:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:06.512 "params": { 00:30:06.512 "name": "Nvme1", 00:30:06.512 "trtype": "tcp", 00:30:06.512 "traddr": "10.0.0.2", 00:30:06.512 "adrfam": "ipv4", 00:30:06.512 "trsvcid": "4420", 00:30:06.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.512 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:06.512 "hdgst": false, 00:30:06.512 "ddgst": false 00:30:06.512 }, 00:30:06.512 "method": "bdev_nvme_attach_controller" 00:30:06.512 }' 00:30:06.512 15:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:06.512 15:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:06.512 "params": { 00:30:06.512 "name": "Nvme1", 00:30:06.512 "trtype": "tcp", 00:30:06.512 "traddr": "10.0.0.2", 00:30:06.512 "adrfam": "ipv4", 00:30:06.512 "trsvcid": "4420", 00:30:06.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.512 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:06.512 "hdgst": false, 00:30:06.512 "ddgst": false 00:30:06.512 }, 00:30:06.512 "method": "bdev_nvme_attach_controller" 00:30:06.512 }' 00:30:06.512 15:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:06.512 15:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:06.512 "params": { 00:30:06.512 "name": "Nvme1", 00:30:06.512 "trtype": "tcp", 00:30:06.512 "traddr": "10.0.0.2", 00:30:06.512 "adrfam": "ipv4", 00:30:06.512 "trsvcid": "4420", 00:30:06.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.512 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:06.512 "hdgst": false, 00:30:06.512 "ddgst": false 00:30:06.512 }, 00:30:06.512 "method": "bdev_nvme_attach_controller" 
00:30:06.512 }' 00:30:06.512 [2024-11-20 15:39:10.036268] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:30:06.512 [2024-11-20 15:39:10.036319] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:06.512 [2024-11-20 15:39:10.037590] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:30:06.512 [2024-11-20 15:39:10.037649] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:06.512 [2024-11-20 15:39:10.038747] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:30:06.512 [2024-11-20 15:39:10.038792] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:06.512 [2024-11-20 15:39:10.042361] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:30:06.512 [2024-11-20 15:39:10.042407] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:06.512 [2024-11-20 15:39:10.234692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.512 [2024-11-20 15:39:10.277799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:06.512 [2024-11-20 15:39:10.331801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.512 [2024-11-20 15:39:10.374468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.512 [2024-11-20 15:39:10.380203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:06.770 [2024-11-20 15:39:10.417288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:06.770 [2024-11-20 15:39:10.432584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.770 [2024-11-20 15:39:10.475608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:06.770 Running I/O for 1 seconds... 00:30:06.770 Running I/O for 1 seconds... 00:30:06.770 Running I/O for 1 seconds... 00:30:07.028 Running I/O for 1 seconds... 
00:30:07.963 13781.00 IOPS, 53.83 MiB/s 00:30:07.963 Latency(us) 00:30:07.963 [2024-11-20T14:39:11.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.963 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:07.963 Nvme1n1 : 1.01 13845.47 54.08 0.00 0.00 9218.55 3419.27 11169.61 00:30:07.963 [2024-11-20T14:39:11.871Z] =================================================================================================================== 00:30:07.963 [2024-11-20T14:39:11.871Z] Total : 13845.47 54.08 0.00 0.00 9218.55 3419.27 11169.61 00:30:07.963 236616.00 IOPS, 924.28 MiB/s 00:30:07.963 Latency(us) 00:30:07.963 [2024-11-20T14:39:11.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.963 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:07.963 Nvme1n1 : 1.00 236246.95 922.84 0.00 0.00 538.94 235.07 1531.55 00:30:07.963 [2024-11-20T14:39:11.871Z] =================================================================================================================== 00:30:07.963 [2024-11-20T14:39:11.871Z] Total : 236246.95 922.84 0.00 0.00 538.94 235.07 1531.55 00:30:07.963 10606.00 IOPS, 41.43 MiB/s 00:30:07.963 Latency(us) 00:30:07.963 [2024-11-20T14:39:11.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.963 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:07.963 Nvme1n1 : 1.01 10659.68 41.64 0.00 0.00 11963.54 4160.11 14360.93 00:30:07.963 [2024-11-20T14:39:11.871Z] =================================================================================================================== 00:30:07.963 [2024-11-20T14:39:11.871Z] Total : 10659.68 41.64 0.00 0.00 11963.54 4160.11 14360.93 00:30:07.963 11027.00 IOPS, 43.07 MiB/s 00:30:07.963 Latency(us) 00:30:07.963 [2024-11-20T14:39:11.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.963 Job: Nvme1n1 (Core Mask 
0x10, workload: write, depth: 128, IO size: 4096) 00:30:07.963 Nvme1n1 : 1.01 11117.98 43.43 0.00 0.00 11485.07 1688.26 18578.03 00:30:07.963 [2024-11-20T14:39:11.871Z] =================================================================================================================== 00:30:07.963 [2024-11-20T14:39:11.871Z] Total : 11117.98 43.43 0.00 0.00 11485.07 1688.26 18578.03 00:30:07.963 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2366122 00:30:07.963 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2366125 00:30:07.963 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2366129 00:30:07.963 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:07.963 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.963 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:07.963 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.963 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:07.963 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:07.963 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:07.963 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:07.963 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:08.222 15:39:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:08.222 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:08.222 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:08.222 rmmod nvme_tcp 00:30:08.222 rmmod nvme_fabrics 00:30:08.222 rmmod nvme_keyring 00:30:08.222 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:08.222 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:08.222 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:08.222 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2365997 ']' 00:30:08.222 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2365997 00:30:08.222 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2365997 ']' 00:30:08.222 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2365997 00:30:08.222 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:08.222 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:08.222 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2365997 00:30:08.222 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:08.222 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:08.222 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2365997' 00:30:08.222 killing process with pid 2365997 00:30:08.222 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2365997 00:30:08.222 15:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2365997 00:30:08.222 15:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:08.222 15:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:08.222 15:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:08.222 15:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:08.222 15:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:08.222 15:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:08.222 15:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:08.481 15:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:08.481 15:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:08.481 15:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.481 15:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.481 
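The killprocess helper traced above uses `kill -0` (signal 0) to test whether a PID still exists: signal 0 delivers nothing but still performs the existence/permission check, so only the exit status matters. A minimal sketch of that liveness-check pattern (`is_alive` is an illustrative name, not an SPDK helper):

```shell
# is_alive PID: exit 0 if the process exists and is signalable.
is_alive() {
  kill -0 "$1" 2>/dev/null
}

# The current shell is certainly alive.
is_alive $$ && echo "pid $$ is alive"

# A reaped child is certainly not: once wait collects it, the PID is gone.
( : ) &
dead_pid=$!
wait "$dead_pid"
is_alive "$dead_pid" || echo "pid $dead_pid is gone"
```

This is why the trace shows `kill -0 2365997` before the real `kill`: it distinguishes "process already exited" from "kill failed" without sending any signal.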
15:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.383 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:10.383 00:30:10.383 real 0m10.792s 00:30:10.383 user 0m14.824s 00:30:10.383 sys 0m6.491s 00:30:10.383 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:10.383 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:10.383 ************************************ 00:30:10.383 END TEST nvmf_bdev_io_wait 00:30:10.383 ************************************ 00:30:10.383 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:10.383 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:10.383 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:10.384 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:10.384 ************************************ 00:30:10.384 START TEST nvmf_queue_depth 00:30:10.384 ************************************ 00:30:10.384 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:10.643 * Looking for test storage... 
00:30:10.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:10.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.643 --rc genhtml_branch_coverage=1 00:30:10.643 --rc genhtml_function_coverage=1 00:30:10.643 --rc genhtml_legend=1 00:30:10.643 --rc geninfo_all_blocks=1 00:30:10.643 --rc geninfo_unexecuted_blocks=1 00:30:10.643 00:30:10.643 ' 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:10.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.643 --rc genhtml_branch_coverage=1 00:30:10.643 --rc genhtml_function_coverage=1 00:30:10.643 --rc genhtml_legend=1 00:30:10.643 --rc geninfo_all_blocks=1 00:30:10.643 --rc geninfo_unexecuted_blocks=1 00:30:10.643 00:30:10.643 ' 00:30:10.643 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:10.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.643 --rc genhtml_branch_coverage=1 00:30:10.643 --rc genhtml_function_coverage=1 00:30:10.643 --rc genhtml_legend=1 00:30:10.643 --rc geninfo_all_blocks=1 00:30:10.643 --rc geninfo_unexecuted_blocks=1 00:30:10.643 00:30:10.644 ' 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:10.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.644 --rc genhtml_branch_coverage=1 00:30:10.644 --rc genhtml_function_coverage=1 00:30:10.644 --rc genhtml_legend=1 00:30:10.644 --rc 
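The trace above steps through cmp_versions in scripts/common.sh: each version string is split on `.`, missing fields are treated as zero, and the fields are compared numerically one by one (here deciding that lcov 1.15 sorts before 2). A compact sketch of the same idea (`ver_lt` is an illustrative name, not the SPDK helper itself):

```shell
# ver_lt A B: exit 0 if version A sorts strictly before version B,
# comparing dot-separated fields numerically; missing fields count as 0.
ver_lt() {
  local IFS=.
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i x y
  for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
    x=${a[i]:-0}
    y=${b[i]:-0}
    if (( x < y )); then return 0; fi
    if (( x > y )); then return 1; fi
  done
  return 1   # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

Numeric (not lexicographic) field comparison is the point: a plain string sort would put 1.9 after 1.10, which is why the script walks the fields with arithmetic tests instead of relying on `sort` or `[[ < ]]`.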
geninfo_all_blocks=1 00:30:10.644 --rc geninfo_unexecuted_blocks=1 00:30:10.644 00:30:10.644 ' 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.644 15:39:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:10.644 15:39:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:10.644 15:39:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:10.644 15:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:17.212 
15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:17.212 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.212 15:39:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:17.212 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.212 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:17.213 Found net devices under 0000:86:00.0: cvl_0_0 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:17.213 Found net devices under 0000:86:00.1: cvl_0_1 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:17.213 15:39:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:17.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:17.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms
00:30:17.213
00:30:17.213 --- 10.0.0.2 ping statistics ---
00:30:17.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:17.213 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms
00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:17.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:17.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms
00:30:17.213
00:30:17.213 --- 10.0.0.1 ping statistics ---
00:30:17.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:17.213 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms
00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0
00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:30:17.213 15:39:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2370037 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2370037 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2370037 ']' 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:17.213 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.213 [2024-11-20 15:39:20.437746] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:17.213 [2024-11-20 15:39:20.438738] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:30:17.213 [2024-11-20 15:39:20.438779] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:17.213 [2024-11-20 15:39:20.520436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.213 [2024-11-20 15:39:20.561759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:17.213 [2024-11-20 15:39:20.561797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:17.214 [2024-11-20 15:39:20.561803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:17.214 [2024-11-20 15:39:20.561809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:17.214 [2024-11-20 15:39:20.561814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:17.214 [2024-11-20 15:39:20.562382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.214 [2024-11-20 15:39:20.629193] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:17.214 [2024-11-20 15:39:20.629406] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.214 [2024-11-20 15:39:20.703070] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.214 Malloc0 00:30:17.214 15:39:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.214 [2024-11-20 15:39:20.783218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.214 
15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2370151 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2370151 /var/tmp/bdevperf.sock 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2370151 ']' 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:17.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:17.214 15:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.214 [2024-11-20 15:39:20.836050] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:30:17.214 [2024-11-20 15:39:20.836095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2370151 ] 00:30:17.214 [2024-11-20 15:39:20.914867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.214 [2024-11-20 15:39:20.955869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.214 15:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:17.214 15:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:17.214 15:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:17.214 15:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.214 15:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:17.473 NVMe0n1 00:30:17.473 15:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.473 15:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:17.473 Running I/O for 10 seconds... 
00:30:19.343 11292.00 IOPS, 44.11 MiB/s
[2024-11-20T14:39:24.627Z] 11786.50 IOPS, 46.04 MiB/s
[2024-11-20T14:39:25.562Z] 11953.00 IOPS, 46.69 MiB/s
[2024-11-20T14:39:26.498Z] 12111.50 IOPS, 47.31 MiB/s
[2024-11-20T14:39:27.434Z] 12207.40 IOPS, 47.69 MiB/s
[2024-11-20T14:39:28.370Z] 12219.33 IOPS, 47.73 MiB/s
[2024-11-20T14:39:29.313Z] 12230.43 IOPS, 47.78 MiB/s
[2024-11-20T14:39:30.249Z] 12259.50 IOPS, 47.89 MiB/s
[2024-11-20T14:39:31.625Z] 12266.56 IOPS, 47.92 MiB/s
[2024-11-20T14:39:31.625Z] 12264.80 IOPS, 47.91 MiB/s
00:30:27.717 Latency(us)
00:30:27.717 [2024-11-20T14:39:31.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:27.717 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:30:27.717 Verification LBA range: start 0x0 length 0x4000
00:30:27.717 NVMe0n1 : 10.06 12276.27 47.95 0.00 0.00 83113.20 19147.91 53568.56
00:30:27.717 [2024-11-20T14:39:31.625Z] ===================================================================================================================
00:30:27.717 [2024-11-20T14:39:31.625Z] Total : 12276.27 47.95 0.00 0.00 83113.20 19147.91 53568.56
00:30:27.717 {
00:30:27.717 "results": [
00:30:27.717 {
00:30:27.717 "job": "NVMe0n1",
00:30:27.717 "core_mask": "0x1",
00:30:27.717 "workload": "verify",
00:30:27.717 "status": "finished",
00:30:27.717 "verify_range": {
00:30:27.717 "start": 0,
00:30:27.717 "length": 16384
00:30:27.717 },
00:30:27.717 "queue_depth": 1024,
00:30:27.717 "io_size": 4096,
00:30:27.717 "runtime": 10.064214,
00:30:27.717 "iops": 12276.269165182695,
00:30:27.717 "mibps": 47.954176426494904,
00:30:27.717 "io_failed": 0,
00:30:27.717 "io_timeout": 0,
00:30:27.717 "avg_latency_us": 83113.19572547582,
00:30:27.717 "min_latency_us": 19147.909565217393,
00:30:27.717 "max_latency_us": 53568.556521739134
00:30:27.717 }
00:30:27.717 ],
00:30:27.717 "core_count": 1
00:30:27.717 }
00:30:27.717 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2370151
00:30:27.717 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2370151 ']'
00:30:27.717 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2370151
00:30:27.717 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:30:27.717 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:27.717 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2370151
00:30:27.717 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:27.717 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:27.717 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2370151'
00:30:27.717 killing process with pid 2370151
00:30:27.717 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2370151
00:30:27.717 Received shutdown signal, test time was about 10.000000 seconds
00:30:27.717
00:30:27.717 Latency(us)
00:30:27.717 [2024-11-20T14:39:31.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:27.717 [2024-11-20T14:39:31.625Z] ===================================================================================================================
00:30:27.717 [2024-11-20T14:39:31.625Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:27.717 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2370151
00:30:27.717 15:39:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:27.718 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:27.718 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:27.718 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:27.718 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:27.718 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:27.718 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:27.718 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:27.718 rmmod nvme_tcp 00:30:27.718 rmmod nvme_fabrics 00:30:27.718 rmmod nvme_keyring 00:30:27.718 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:27.718 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:27.718 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:27.718 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2370037 ']' 00:30:27.718 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2370037 00:30:27.718 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2370037 ']' 00:30:27.718 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2370037 00:30:27.718 15:39:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:27.718 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:27.718 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2370037 00:30:27.977 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:27.977 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:27.977 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2370037' 00:30:27.977 killing process with pid 2370037 00:30:27.977 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2370037 00:30:27.977 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2370037 00:30:27.977 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:27.977 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:27.977 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:27.977 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:27.977 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:27.977 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:27.977 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:30:27.977 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:27.977 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:27.977 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:27.977 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:27.977 15:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:30.512 15:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:30.512
00:30:30.512 real 0m19.635s
00:30:30.512 user 0m22.607s
00:30:30.512 sys 0m6.296s
00:30:30.512 15:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:30.512 15:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:30:30.512 ************************************
00:30:30.512 END TEST nvmf_queue_depth
00:30:30.512 ************************************
00:30:30.512 15:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode
00:30:30.512 15:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:30:30.512 15:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:30.512 15:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:30:30.512 ************************************
00:30:30.512 START 
TEST nvmf_target_multipath 00:30:30.512 ************************************ 00:30:30.512 15:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:30.512 * Looking for test storage... 00:30:30.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:30.512 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:30.512 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:30.513 15:39:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:30.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.513 --rc genhtml_branch_coverage=1 00:30:30.513 --rc genhtml_function_coverage=1 00:30:30.513 --rc genhtml_legend=1 00:30:30.513 --rc geninfo_all_blocks=1 00:30:30.513 --rc geninfo_unexecuted_blocks=1 00:30:30.513 00:30:30.513 ' 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:30.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.513 --rc genhtml_branch_coverage=1 00:30:30.513 --rc genhtml_function_coverage=1 00:30:30.513 --rc genhtml_legend=1 00:30:30.513 --rc geninfo_all_blocks=1 00:30:30.513 --rc geninfo_unexecuted_blocks=1 00:30:30.513 00:30:30.513 ' 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:30.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.513 --rc genhtml_branch_coverage=1 00:30:30.513 --rc genhtml_function_coverage=1 00:30:30.513 --rc genhtml_legend=1 00:30:30.513 --rc geninfo_all_blocks=1 00:30:30.513 --rc geninfo_unexecuted_blocks=1 00:30:30.513 00:30:30.513 ' 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:30.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.513 --rc genhtml_branch_coverage=1 00:30:30.513 --rc genhtml_function_coverage=1 00:30:30.513 --rc genhtml_legend=1 00:30:30.513 --rc geninfo_all_blocks=1 00:30:30.513 --rc geninfo_unexecuted_blocks=1 00:30:30.513 00:30:30.513 ' 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:30.513 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:30.514 15:39:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.514 15:39:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:30.514 15:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:37.083 15:39:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:37.083 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:37.083 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:37.083 Found net devices under 0000:86:00.0: cvl_0_0 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.083 15:39:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:37.083 Found net devices under 0000:86:00.1: cvl_0_1 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:37.083 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:37.084 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:37.084 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:37.084 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:37.084 15:39:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:37.084 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:37.084 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:37.084 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:37.084 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:37.084 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:37.084 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:37.084 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:37.084 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:37.084 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:37.084 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:37.084 15:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:37.084 15:39:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:37.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:37.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:30:37.084 00:30:37.084 --- 10.0.0.2 ping statistics --- 00:30:37.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.084 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:37.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:37.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:30:37.084 00:30:37.084 --- 10.0.0.1 ping statistics --- 00:30:37.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.084 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:37.084 only one NIC for nvmf test 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:37.084 15:39:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:37.084 rmmod nvme_tcp 00:30:37.084 rmmod nvme_fabrics 00:30:37.084 rmmod nvme_keyring 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:37.084 15:39:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.084 15:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.461 
15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.461 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.462 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:38.462 00:30:38.462 real 0m8.278s 00:30:38.462 user 0m1.829s 00:30:38.462 sys 0m4.471s 00:30:38.462 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:38.462 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:38.462 ************************************ 00:30:38.462 END TEST nvmf_target_multipath 00:30:38.462 ************************************ 00:30:38.462 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:38.462 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:38.462 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.462 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:38.462 ************************************ 00:30:38.462 START TEST nvmf_zcopy 00:30:38.462 ************************************ 00:30:38.462 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:38.740 * Looking for test storage... 
00:30:38.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:38.740 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:38.740 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:30:38.740 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:38.740 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:38.740 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:38.740 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:38.740 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:38.740 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:38.740 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:38.740 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:38.740 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:38.740 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:38.740 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:38.741 15:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:38.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.741 --rc genhtml_branch_coverage=1 00:30:38.741 --rc genhtml_function_coverage=1 00:30:38.741 --rc genhtml_legend=1 00:30:38.741 --rc geninfo_all_blocks=1 00:30:38.741 --rc geninfo_unexecuted_blocks=1 00:30:38.741 00:30:38.741 ' 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:38.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.741 --rc genhtml_branch_coverage=1 00:30:38.741 --rc genhtml_function_coverage=1 00:30:38.741 --rc genhtml_legend=1 00:30:38.741 --rc geninfo_all_blocks=1 00:30:38.741 --rc geninfo_unexecuted_blocks=1 00:30:38.741 00:30:38.741 ' 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:38.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.741 --rc genhtml_branch_coverage=1 00:30:38.741 --rc genhtml_function_coverage=1 00:30:38.741 --rc genhtml_legend=1 00:30:38.741 --rc geninfo_all_blocks=1 00:30:38.741 --rc geninfo_unexecuted_blocks=1 00:30:38.741 00:30:38.741 ' 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:38.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.741 --rc genhtml_branch_coverage=1 00:30:38.741 --rc genhtml_function_coverage=1 00:30:38.741 --rc genhtml_legend=1 00:30:38.741 --rc geninfo_all_blocks=1 00:30:38.741 --rc geninfo_unexecuted_blocks=1 00:30:38.741 00:30:38.741 ' 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.741 15:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.741 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:38.742 15:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:38.742 15:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.452 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.452 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:45.452 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:45.452 
15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:45.452 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:45.452 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:45.452 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:45.452 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:45.452 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:45.452 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:45.452 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:45.452 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:45.452 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:45.452 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:45.452 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:45.452 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.452 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.452 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.452 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.452 15:39:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.452 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.452 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:45.453 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:45.453 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:45.453 Found net devices under 0000:86:00.0: cvl_0_0 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:45.453 Found net devices under 0000:86:00.1: cvl_0_1 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.453 15:39:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:45.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:45.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:30:45.453 00:30:45.453 --- 10.0.0.2 ping statistics --- 00:30:45.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.453 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:45.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:30:45.453 00:30:45.453 --- 10.0.0.1 ping statistics --- 00:30:45.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.453 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:45.453 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
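The `nvmf_tcp_init` steps traced above build a two-endpoint TCP topology on one host by moving the target-side interface into a network namespace. A dry-run sketch of that sequence, with interface names, addresses, and the iptables rule taken from the log (running the commands for real requires root, so this prints them instead of executing them):

```shell
#!/bin/sh
# Dry-run of the namespace setup recorded in the trace above.
# cvl_0_0 (target side) moves into the namespace; cvl_0_1 (initiator
# side) stays in the default namespace. Swap run() for `"$@"` to apply.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0
INI_IF=cvl_0_1

run() { echo "+ $*"; }

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Verify both directions, as the log does:
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The two pings at the end mirror the `common.sh@290`/`@291` checks: each side must reach the other before the target is started inside the namespace.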
nvmfpid=2378801 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2378801 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2378801 ']' 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.454 [2024-11-20 15:39:48.495878] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:45.454 [2024-11-20 15:39:48.496874] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:30:45.454 [2024-11-20 15:39:48.496913] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.454 [2024-11-20 15:39:48.576017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.454 [2024-11-20 15:39:48.618074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.454 [2024-11-20 15:39:48.618112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.454 [2024-11-20 15:39:48.618119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.454 [2024-11-20 15:39:48.618125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.454 [2024-11-20 15:39:48.618130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:45.454 [2024-11-20 15:39:48.618657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.454 [2024-11-20 15:39:48.684838] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:45.454 [2024-11-20 15:39:48.685073] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.454 [2024-11-20 15:39:48.751396] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.454 
15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.454 [2024-11-20 15:39:48.775575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.454 malloc0 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:45.454 { 00:30:45.454 "params": { 00:30:45.454 "name": "Nvme$subsystem", 00:30:45.454 "trtype": "$TEST_TRANSPORT", 00:30:45.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.454 "adrfam": "ipv4", 00:30:45.454 "trsvcid": "$NVMF_PORT", 00:30:45.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.454 "hdgst": ${hdgst:-false}, 00:30:45.454 "ddgst": ${ddgst:-false} 00:30:45.454 }, 00:30:45.454 "method": "bdev_nvme_attach_controller" 00:30:45.454 } 00:30:45.454 EOF 00:30:45.454 )") 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:45.454 15:39:48 
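The `rpc_cmd` calls traced above (zcopy.sh lines 22-30) configure the target in five steps: a zero-copy TCP transport, a subsystem, its listener plus a discovery listener, a malloc bdev, and the namespace attachment. A dry-run sketch of that sequence; NQNs, serial, and arguments are taken verbatim from the log, and `rpc` here stands in for SPDK's `scripts/rpc.py` talking to the target's `/var/tmp/spdk.sock`:

```shell
#!/bin/sh
# Dry-run of the target-configuration RPCs recorded above.
# Replace the echo with a real rpc.py invocation to apply.
rpc() { echo "+ rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 4096 -b malloc0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```

Note the `-m 10` cap on the subsystem's namespaces and `-n 1` pinning the malloc bdev to NSID 1; the later `Requested NSID 1 already in use` errors come from a test loop that deliberately retries this last RPC while the namespace is held.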
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:45.454 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:45.455 15:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:45.455 "params": { 00:30:45.455 "name": "Nvme1", 00:30:45.455 "trtype": "tcp", 00:30:45.455 "traddr": "10.0.0.2", 00:30:45.455 "adrfam": "ipv4", 00:30:45.455 "trsvcid": "4420", 00:30:45.455 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:45.455 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:45.455 "hdgst": false, 00:30:45.455 "ddgst": false 00:30:45.455 }, 00:30:45.455 "method": "bdev_nvme_attach_controller" 00:30:45.455 }' 00:30:45.455 [2024-11-20 15:39:48.865630] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:30:45.455 [2024-11-20 15:39:48.865672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2378829 ] 00:30:45.455 [2024-11-20 15:39:48.939289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.455 [2024-11-20 15:39:48.980461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.455 Running I/O for 10 seconds... 
00:30:47.771 8176.00 IOPS, 63.88 MiB/s [2024-11-20T14:39:52.614Z] 8262.50 IOPS, 64.55 MiB/s [2024-11-20T14:39:53.550Z] 8315.33 IOPS, 64.96 MiB/s [2024-11-20T14:39:54.484Z] 8340.75 IOPS, 65.16 MiB/s [2024-11-20T14:39:55.419Z] 8357.80 IOPS, 65.30 MiB/s [2024-11-20T14:39:56.356Z] 8361.33 IOPS, 65.32 MiB/s [2024-11-20T14:39:57.731Z] 8375.43 IOPS, 65.43 MiB/s [2024-11-20T14:39:58.667Z] 8380.75 IOPS, 65.47 MiB/s [2024-11-20T14:39:59.603Z] 8385.56 IOPS, 65.51 MiB/s [2024-11-20T14:39:59.603Z] 8391.90 IOPS, 65.56 MiB/s 00:30:55.695 Latency(us) 00:30:55.695 [2024-11-20T14:39:59.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:55.695 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:30:55.695 Verification LBA range: start 0x0 length 0x1000 00:30:55.695 Nvme1n1 : 10.01 8395.84 65.59 0.00 0.00 15201.94 1111.26 22111.28 00:30:55.695 [2024-11-20T14:39:59.603Z] =================================================================================================================== 00:30:55.695 [2024-11-20T14:39:59.603Z] Total : 8395.84 65.59 0.00 0.00 15201.94 1111.26 22111.28 00:30:55.695 15:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2380501 00:30:55.695 15:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:30:55.695 15:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:55.695 15:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:30:55.695 15:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:30:55.695 15:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:55.695 15:39:59 
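The MiB/s column in the bdevperf table above follows directly from the IOPS column and the 8 KiB IO size (`-o 8192`): MiB/s = IOPS x io_size / 2^20. A quick check against the reported 10-second average:

```shell
#!/bin/sh
# Recompute the throughput column from the trace's final IOPS figure.
iops=8395.84
io_size=8192   # bytes, from bdevperf's -o 8192
awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", iops * sz / 1048576 }'
# prints 65.59 MiB/s, matching the Nvme1n1 row
```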
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:55.695 15:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:55.695 15:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:55.695 { 00:30:55.695 "params": { 00:30:55.695 "name": "Nvme$subsystem", 00:30:55.695 "trtype": "$TEST_TRANSPORT", 00:30:55.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:55.695 "adrfam": "ipv4", 00:30:55.695 "trsvcid": "$NVMF_PORT", 00:30:55.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:55.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:55.695 "hdgst": ${hdgst:-false}, 00:30:55.695 "ddgst": ${ddgst:-false} 00:30:55.695 }, 00:30:55.695 "method": "bdev_nvme_attach_controller" 00:30:55.695 } 00:30:55.695 EOF 00:30:55.695 )") 00:30:55.695 [2024-11-20 15:39:59.491005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.695 [2024-11-20 15:39:59.491040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.695 15:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:55.695 15:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:30:55.695 15:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:55.695 15:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:55.695 "params": { 00:30:55.695 "name": "Nvme1", 00:30:55.695 "trtype": "tcp", 00:30:55.695 "traddr": "10.0.0.2", 00:30:55.695 "adrfam": "ipv4", 00:30:55.695 "trsvcid": "4420", 00:30:55.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:55.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:55.695 "hdgst": false, 00:30:55.695 "ddgst": false 00:30:55.695 }, 00:30:55.695 "method": "bdev_nvme_attach_controller" 00:30:55.695 }' 00:30:55.695 [2024-11-20 15:39:59.502966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.695 [2024-11-20 15:39:59.502979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.696 [2024-11-20 15:39:59.514966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.696 [2024-11-20 15:39:59.514976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.696 [2024-11-20 15:39:59.526965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.696 [2024-11-20 15:39:59.526975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.696 [2024-11-20 15:39:59.530621] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:30:55.696 [2024-11-20 15:39:59.530662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2380501 ] 00:30:55.696 [2024-11-20 15:39:59.538963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.696 [2024-11-20 15:39:59.538974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.696 [2024-11-20 15:39:59.550961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.696 [2024-11-20 15:39:59.550970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.696 [2024-11-20 15:39:59.562963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.696 [2024-11-20 15:39:59.562973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.696 [2024-11-20 15:39:59.574963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.696 [2024-11-20 15:39:59.574972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.696 [2024-11-20 15:39:59.586964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.696 [2024-11-20 15:39:59.586990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.696 [2024-11-20 15:39:59.598963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.696 [2024-11-20 15:39:59.598974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.606978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.955 [2024-11-20 15:39:59.610964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:30:55.955 [2024-11-20 15:39:59.610974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.622964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.955 [2024-11-20 15:39:59.622995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.634966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.955 [2024-11-20 15:39:59.634975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.646972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.955 [2024-11-20 15:39:59.646988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.649345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.955 [2024-11-20 15:39:59.658971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.955 [2024-11-20 15:39:59.658983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.670974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.955 [2024-11-20 15:39:59.670991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.682970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.955 [2024-11-20 15:39:59.682985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.694964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.955 [2024-11-20 15:39:59.694995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.706966] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.955 [2024-11-20 15:39:59.706979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.718963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.955 [2024-11-20 15:39:59.718992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.731335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.955 [2024-11-20 15:39:59.731354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.742976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.955 [2024-11-20 15:39:59.742994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.754971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.955 [2024-11-20 15:39:59.754986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.766970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.955 [2024-11-20 15:39:59.766985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.778967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.955 [2024-11-20 15:39:59.778980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.790960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.955 [2024-11-20 15:39:59.790969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.802962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:55.955 [2024-11-20 15:39:59.802971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.814970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.955 [2024-11-20 15:39:59.814983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.826962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.955 [2024-11-20 15:39:59.826987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.838963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.955 [2024-11-20 15:39:59.838972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.955 [2024-11-20 15:39:59.850960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.955 [2024-11-20 15:39:59.850969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.214 [2024-11-20 15:39:59.862966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.214 [2024-11-20 15:39:59.862984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.214 [2024-11-20 15:39:59.874962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.214 [2024-11-20 15:39:59.874971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.214 [2024-11-20 15:39:59.886963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.214 [2024-11-20 15:39:59.886973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.215 [2024-11-20 15:39:59.898963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.215 
[2024-11-20 15:39:59.898974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.215 [2024-11-20 15:39:59.910968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.215 [2024-11-20 15:39:59.910985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.215 [2024-11-20 15:39:59.922969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.215 [2024-11-20 15:39:59.922985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.215 Running I/O for 5 seconds... 00:30:56.215 [2024-11-20 15:39:59.938718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.215 [2024-11-20 15:39:59.938738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.215 [2024-11-20 15:39:59.953554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.215 [2024-11-20 15:39:59.953573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.215 [2024-11-20 15:39:59.969023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.215 [2024-11-20 15:39:59.969041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.215 [2024-11-20 15:39:59.984338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.215 [2024-11-20 15:39:59.984356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.215 [2024-11-20 15:39:59.999305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.215 [2024-11-20 15:39:59.999324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.215 [2024-11-20 15:40:00.015916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.215 [2024-11-20 
15:40:00.015937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.215 [2024-11-20 15:40:00.034341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.215 [2024-11-20 15:40:00.034364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:57.253 16019.00 IOPS, 125.15 MiB/s [2024-11-20T14:40:01.161Z]
00:30:58.295 16141.50 IOPS, 126.11 MiB/s [2024-11-20T14:40:02.203Z]
00:30:58.554 [2024-11-20 15:40:02.383092] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.554 [2024-11-20 15:40:02.383112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.554 [2024-11-20 15:40:02.394702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.554 [2024-11-20 15:40:02.394721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.554 [2024-11-20 15:40:02.408563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.554 [2024-11-20 15:40:02.408582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.554 [2024-11-20 15:40:02.423624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.554 [2024-11-20 15:40:02.423650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.554 [2024-11-20 15:40:02.439478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.554 [2024-11-20 15:40:02.439497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.554 [2024-11-20 15:40:02.454645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.554 [2024-11-20 15:40:02.454664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.812 [2024-11-20 15:40:02.467694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.812 [2024-11-20 15:40:02.467712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.812 [2024-11-20 15:40:02.483048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.812 [2024-11-20 15:40:02.483067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.812 [2024-11-20 15:40:02.496711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:58.812 [2024-11-20 15:40:02.496730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.812 [2024-11-20 15:40:02.512122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.812 [2024-11-20 15:40:02.512141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.812 [2024-11-20 15:40:02.522741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.812 [2024-11-20 15:40:02.522761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.812 [2024-11-20 15:40:02.537211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.812 [2024-11-20 15:40:02.537230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.812 [2024-11-20 15:40:02.552374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.812 [2024-11-20 15:40:02.552392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.812 [2024-11-20 15:40:02.567549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.812 [2024-11-20 15:40:02.567572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.812 [2024-11-20 15:40:02.582717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.812 [2024-11-20 15:40:02.582735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.812 [2024-11-20 15:40:02.596464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.812 [2024-11-20 15:40:02.596482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.812 [2024-11-20 15:40:02.611447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.812 
[2024-11-20 15:40:02.611466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.812 [2024-11-20 15:40:02.622450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.812 [2024-11-20 15:40:02.622468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.812 [2024-11-20 15:40:02.636874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.812 [2024-11-20 15:40:02.636892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.812 [2024-11-20 15:40:02.652042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.812 [2024-11-20 15:40:02.652061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.812 [2024-11-20 15:40:02.667070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.812 [2024-11-20 15:40:02.667088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.812 [2024-11-20 15:40:02.680972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.812 [2024-11-20 15:40:02.680990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.812 [2024-11-20 15:40:02.696369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.813 [2024-11-20 15:40:02.696388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.813 [2024-11-20 15:40:02.711561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.813 [2024-11-20 15:40:02.711578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.070 [2024-11-20 15:40:02.727148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.070 [2024-11-20 15:40:02.727167] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.070 [2024-11-20 15:40:02.737566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.070 [2024-11-20 15:40:02.737584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.070 [2024-11-20 15:40:02.752741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.070 [2024-11-20 15:40:02.752759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.070 [2024-11-20 15:40:02.767700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.070 [2024-11-20 15:40:02.767718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.070 [2024-11-20 15:40:02.783031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.070 [2024-11-20 15:40:02.783050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.070 [2024-11-20 15:40:02.795200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.070 [2024-11-20 15:40:02.795218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.070 [2024-11-20 15:40:02.808944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.070 [2024-11-20 15:40:02.808972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.070 [2024-11-20 15:40:02.824401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.070 [2024-11-20 15:40:02.824419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.070 [2024-11-20 15:40:02.839582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.070 [2024-11-20 15:40:02.839606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:59.070 [2024-11-20 15:40:02.854629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.070 [2024-11-20 15:40:02.854648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.070 [2024-11-20 15:40:02.868598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.070 [2024-11-20 15:40:02.868616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.070 [2024-11-20 15:40:02.883828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.070 [2024-11-20 15:40:02.883847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.070 [2024-11-20 15:40:02.899079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.070 [2024-11-20 15:40:02.899098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.070 [2024-11-20 15:40:02.913314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.070 [2024-11-20 15:40:02.913332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.070 [2024-11-20 15:40:02.928595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.071 [2024-11-20 15:40:02.928614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.071 16172.33 IOPS, 126.35 MiB/s [2024-11-20T14:40:02.979Z] [2024-11-20 15:40:02.943915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.071 [2024-11-20 15:40:02.943933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.071 [2024-11-20 15:40:02.959078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.071 [2024-11-20 15:40:02.959096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:59.071 [2024-11-20 15:40:02.969990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.071 [2024-11-20 15:40:02.970008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.328 [2024-11-20 15:40:02.984699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.328 [2024-11-20 15:40:02.984723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.328 [2024-11-20 15:40:03.000124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.328 [2024-11-20 15:40:03.000142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.328 [2024-11-20 15:40:03.015083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.328 [2024-11-20 15:40:03.015102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.328 [2024-11-20 15:40:03.029054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.328 [2024-11-20 15:40:03.029072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.328 [2024-11-20 15:40:03.044522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.328 [2024-11-20 15:40:03.044540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.328 [2024-11-20 15:40:03.059715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.328 [2024-11-20 15:40:03.059733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.328 [2024-11-20 15:40:03.075095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.328 [2024-11-20 15:40:03.075113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.328 [2024-11-20 15:40:03.088763] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.328 [2024-11-20 15:40:03.088781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.328 [2024-11-20 15:40:03.103389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.328 [2024-11-20 15:40:03.103406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.328 [2024-11-20 15:40:03.119424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.328 [2024-11-20 15:40:03.119446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.328 [2024-11-20 15:40:03.134768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.328 [2024-11-20 15:40:03.134787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.328 [2024-11-20 15:40:03.148643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.328 [2024-11-20 15:40:03.148661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.328 [2024-11-20 15:40:03.163955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.328 [2024-11-20 15:40:03.163973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.328 [2024-11-20 15:40:03.179041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.328 [2024-11-20 15:40:03.179059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.328 [2024-11-20 15:40:03.192575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.328 [2024-11-20 15:40:03.192594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.328 [2024-11-20 15:40:03.208084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:59.328 [2024-11-20 15:40:03.208103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.328 [2024-11-20 15:40:03.223186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.328 [2024-11-20 15:40:03.223204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.586 [2024-11-20 15:40:03.234732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.586 [2024-11-20 15:40:03.234750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.586 [2024-11-20 15:40:03.249287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.586 [2024-11-20 15:40:03.249306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.586 [2024-11-20 15:40:03.264465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.586 [2024-11-20 15:40:03.264483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.586 [2024-11-20 15:40:03.279981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.586 [2024-11-20 15:40:03.279999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.586 [2024-11-20 15:40:03.294911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.586 [2024-11-20 15:40:03.294930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.586 [2024-11-20 15:40:03.307579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.586 [2024-11-20 15:40:03.307597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.586 [2024-11-20 15:40:03.320424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.586 
[2024-11-20 15:40:03.320441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.586 [2024-11-20 15:40:03.335775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.586 [2024-11-20 15:40:03.335793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.586 [2024-11-20 15:40:03.351119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.586 [2024-11-20 15:40:03.351138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.586 [2024-11-20 15:40:03.363248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.586 [2024-11-20 15:40:03.363266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.586 [2024-11-20 15:40:03.376781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.586 [2024-11-20 15:40:03.376799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.586 [2024-11-20 15:40:03.392035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.586 [2024-11-20 15:40:03.392052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.586 [2024-11-20 15:40:03.402536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.586 [2024-11-20 15:40:03.402554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.586 [2024-11-20 15:40:03.417048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.586 [2024-11-20 15:40:03.417066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.586 [2024-11-20 15:40:03.432367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.586 [2024-11-20 15:40:03.432385] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.586 [2024-11-20 15:40:03.447331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.586 [2024-11-20 15:40:03.447349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.586 [2024-11-20 15:40:03.462565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.586 [2024-11-20 15:40:03.462583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.586 [2024-11-20 15:40:03.477091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.586 [2024-11-20 15:40:03.477109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.845 [2024-11-20 15:40:03.492417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.845 [2024-11-20 15:40:03.492436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.845 [2024-11-20 15:40:03.507195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.845 [2024-11-20 15:40:03.507214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.845 [2024-11-20 15:40:03.518043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.845 [2024-11-20 15:40:03.518061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.845 [2024-11-20 15:40:03.533098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.845 [2024-11-20 15:40:03.533116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.845 [2024-11-20 15:40:03.548079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.845 [2024-11-20 15:40:03.548097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:59.845 [2024-11-20 15:40:03.563307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.845 [2024-11-20 15:40:03.563325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.845 [2024-11-20 15:40:03.576056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.845 [2024-11-20 15:40:03.576074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.845 [2024-11-20 15:40:03.591481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.845 [2024-11-20 15:40:03.591499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.845 [2024-11-20 15:40:03.606811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.845 [2024-11-20 15:40:03.606830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.845 [2024-11-20 15:40:03.621252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.845 [2024-11-20 15:40:03.621271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.845 [2024-11-20 15:40:03.636746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.845 [2024-11-20 15:40:03.636764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.845 [2024-11-20 15:40:03.652047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.845 [2024-11-20 15:40:03.652068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.845 [2024-11-20 15:40:03.667476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.845 [2024-11-20 15:40:03.667495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.845 [2024-11-20 15:40:03.683433] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.845 [2024-11-20 15:40:03.683451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.845 [2024-11-20 15:40:03.698739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.845 [2024-11-20 15:40:03.698758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.845 [2024-11-20 15:40:03.713134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.845 [2024-11-20 15:40:03.713152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.845 [2024-11-20 15:40:03.727869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.845 [2024-11-20 15:40:03.727890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.845 [2024-11-20 15:40:03.743458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.845 [2024-11-20 15:40:03.743477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.103 [2024-11-20 15:40:03.759885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.103 [2024-11-20 15:40:03.759905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.103 [2024-11-20 15:40:03.775414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.103 [2024-11-20 15:40:03.775432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.103 [2024-11-20 15:40:03.786632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.103 [2024-11-20 15:40:03.786651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.103 [2024-11-20 15:40:03.801278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:00.103 [2024-11-20 15:40:03.801297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.103 [2024-11-20 15:40:03.816930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.103 [2024-11-20 15:40:03.816958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.103 [2024-11-20 15:40:03.831858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.103 [2024-11-20 15:40:03.831876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.103 [2024-11-20 15:40:03.847344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.103 [2024-11-20 15:40:03.847362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.103 [2024-11-20 15:40:03.863040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.103 [2024-11-20 15:40:03.863058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.103 [2024-11-20 15:40:03.874815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.103 [2024-11-20 15:40:03.874835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.103 [2024-11-20 15:40:03.888830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.103 [2024-11-20 15:40:03.888849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.103 [2024-11-20 15:40:03.904271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.103 [2024-11-20 15:40:03.904290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.103 [2024-11-20 15:40:03.919498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.103 
[2024-11-20 15:40:03.919517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.103 [2024-11-20 15:40:03.934662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.103 [2024-11-20 15:40:03.934681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.103 16201.50 IOPS, 126.57 MiB/s [2024-11-20T14:40:04.011Z] [2024-11-20 15:40:03.948008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.103 [2024-11-20 15:40:03.948027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.104 [2024-11-20 15:40:03.963307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.104 [2024-11-20 15:40:03.963324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.104 [2024-11-20 15:40:03.978750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.104 [2024-11-20 15:40:03.978768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.104 [2024-11-20 15:40:03.992825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.104 [2024-11-20 15:40:03.992844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.104 [2024-11-20 15:40:04.008090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.104 [2024-11-20 15:40:04.008110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.362 [2024-11-20 15:40:04.023417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.362 [2024-11-20 15:40:04.023434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.362 [2024-11-20 15:40:04.038941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.362 
[2024-11-20 15:40:04.038965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.362 [2024-11-20 15:40:04.050703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.362 [2024-11-20 15:40:04.050721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.362 [2024-11-20 15:40:04.065081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.362 [2024-11-20 15:40:04.065099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.362 [2024-11-20 15:40:04.080657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.362 [2024-11-20 15:40:04.080675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.362 [2024-11-20 15:40:04.096407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.362 [2024-11-20 15:40:04.096425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.362 [2024-11-20 15:40:04.110943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.362 [2024-11-20 15:40:04.110968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.362 [2024-11-20 15:40:04.122631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.362 [2024-11-20 15:40:04.122649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.362 [2024-11-20 15:40:04.137240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.362 [2024-11-20 15:40:04.137258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.362 [2024-11-20 15:40:04.151757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.362 [2024-11-20 15:40:04.151774] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.362 [2024-11-20 15:40:04.167249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.362 [2024-11-20 15:40:04.167267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.362 [2024-11-20 15:40:04.179739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.362 [2024-11-20 15:40:04.179756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.362 [2024-11-20 15:40:04.191323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.362 [2024-11-20 15:40:04.191341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.362 [2024-11-20 15:40:04.204379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.362 [2024-11-20 15:40:04.204402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.362 [2024-11-20 15:40:04.219806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.363 [2024-11-20 15:40:04.219825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.363 [2024-11-20 15:40:04.235011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.363 [2024-11-20 15:40:04.235029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.363 [2024-11-20 15:40:04.248344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.363 [2024-11-20 15:40:04.248361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.363 [2024-11-20 15:40:04.263975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.363 [2024-11-20 15:40:04.263993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:00.621 [2024-11-20 15:40:04.279209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.621 [2024-11-20 15:40:04.279228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.621 [2024-11-20 15:40:04.291048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.621 [2024-11-20 15:40:04.291066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.621 [2024-11-20 15:40:04.305430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.621 [2024-11-20 15:40:04.305448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.621 [2024-11-20 15:40:04.320774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.621 [2024-11-20 15:40:04.320792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.621 [2024-11-20 15:40:04.336123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.621 [2024-11-20 15:40:04.336141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.621 [2024-11-20 15:40:04.350790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.621 [2024-11-20 15:40:04.350808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.622 [2024-11-20 15:40:04.364705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.622 [2024-11-20 15:40:04.364723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.622 [2024-11-20 15:40:04.379938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.622 [2024-11-20 15:40:04.379962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.622 [2024-11-20 15:40:04.395735] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.622 [2024-11-20 15:40:04.395752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.622 [2024-11-20 15:40:04.411163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.622 [2024-11-20 15:40:04.411181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.622 [2024-11-20 15:40:04.422467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.622 [2024-11-20 15:40:04.422485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.622 [2024-11-20 15:40:04.436919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.622 [2024-11-20 15:40:04.436937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.622 [2024-11-20 15:40:04.452048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.622 [2024-11-20 15:40:04.452066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.622 [2024-11-20 15:40:04.466965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.622 [2024-11-20 15:40:04.466982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.622 [2024-11-20 15:40:04.479960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.622 [2024-11-20 15:40:04.479985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.622 [2024-11-20 15:40:04.495543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.622 [2024-11-20 15:40:04.495561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.622 [2024-11-20 15:40:04.511526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:00.622 [2024-11-20 15:40:04.511545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.622 [2024-11-20 15:40:04.526906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.880 [2024-11-20 15:40:04.526925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.880 [2024-11-20 15:40:04.538852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.880 [2024-11-20 15:40:04.538870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.880 [2024-11-20 15:40:04.552778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.880 [2024-11-20 15:40:04.552796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.880 [2024-11-20 15:40:04.567921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.880 [2024-11-20 15:40:04.567939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.880 [2024-11-20 15:40:04.582902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.880 [2024-11-20 15:40:04.582920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.880 [2024-11-20 15:40:04.595731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.880 [2024-11-20 15:40:04.595748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.880 [2024-11-20 15:40:04.611369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.880 [2024-11-20 15:40:04.611387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.880 [2024-11-20 15:40:04.623445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.880 
[2024-11-20 15:40:04.623462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.880 [2024-11-20 15:40:04.639784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.880 [2024-11-20 15:40:04.639803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.880 [2024-11-20 15:40:04.654934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.880 [2024-11-20 15:40:04.654959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.880 [2024-11-20 15:40:04.668830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.880 [2024-11-20 15:40:04.668848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.880 [2024-11-20 15:40:04.684316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.880 [2024-11-20 15:40:04.684334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.880 [2024-11-20 15:40:04.699762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.880 [2024-11-20 15:40:04.699780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.880 [2024-11-20 15:40:04.715374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.880 [2024-11-20 15:40:04.715398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.880 [2024-11-20 15:40:04.731257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.880 [2024-11-20 15:40:04.731275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.880 [2024-11-20 15:40:04.743123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.880 [2024-11-20 15:40:04.743142] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.880 [2024-11-20 15:40:04.756703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.880 [2024-11-20 15:40:04.756726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:00.880 [2024-11-20 15:40:04.771768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:00.880 [2024-11-20 15:40:04.771785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.139 [2024-11-20 15:40:04.786812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.139 [2024-11-20 15:40:04.786830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.139 [2024-11-20 15:40:04.801125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.139 [2024-11-20 15:40:04.801143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.139 [2024-11-20 15:40:04.816430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.139 [2024-11-20 15:40:04.816448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.139 [2024-11-20 15:40:04.831555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.139 [2024-11-20 15:40:04.831573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.139 [2024-11-20 15:40:04.842866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.139 [2024-11-20 15:40:04.842884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.139 [2024-11-20 15:40:04.856984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.139 [2024-11-20 15:40:04.857002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:01.139 [2024-11-20 15:40:04.872127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.139 [2024-11-20 15:40:04.872145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.139 [2024-11-20 15:40:04.887156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.139 [2024-11-20 15:40:04.887174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.139 [2024-11-20 15:40:04.900645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.139 [2024-11-20 15:40:04.900663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.139 [2024-11-20 15:40:04.916196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.139 [2024-11-20 15:40:04.916213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.139 [2024-11-20 15:40:04.930808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.139 [2024-11-20 15:40:04.930826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.139 16207.00 IOPS, 126.62 MiB/s [2024-11-20T14:40:05.047Z] [2024-11-20 15:40:04.941600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.139 [2024-11-20 15:40:04.941618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.139 00:31:01.139 Latency(us) 00:31:01.139 [2024-11-20T14:40:05.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.139 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:31:01.139 Nvme1n1 : 5.01 16207.77 126.62 0.00 0.00 7889.23 2322.25 16298.52 00:31:01.139 [2024-11-20T14:40:05.047Z] 
=================================================================================================================== 00:31:01.139 [2024-11-20T14:40:05.047Z] Total : 16207.77 126.62 0.00 0.00 7889.23 2322.25 16298.52 00:31:01.139 [2024-11-20 15:40:04.950966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.139 [2024-11-20 15:40:04.950983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.139 [2024-11-20 15:40:04.962968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.140 [2024-11-20 15:40:04.962982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.140 [2024-11-20 15:40:04.974978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.140 [2024-11-20 15:40:04.974997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.140 [2024-11-20 15:40:04.986968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.140 [2024-11-20 15:40:04.986984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.140 [2024-11-20 15:40:04.998969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.140 [2024-11-20 15:40:04.998983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.140 [2024-11-20 15:40:05.010964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.140 [2024-11-20 15:40:05.010977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.140 [2024-11-20 15:40:05.022967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.140 [2024-11-20 15:40:05.022985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.140 [2024-11-20 15:40:05.034966] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.140 [2024-11-20 15:40:05.034982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.398 [2024-11-20 15:40:05.046965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.398 [2024-11-20 15:40:05.046978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.398 [2024-11-20 15:40:05.058963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.398 [2024-11-20 15:40:05.058984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.399 [2024-11-20 15:40:05.070969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.399 [2024-11-20 15:40:05.070983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.399 [2024-11-20 15:40:05.082962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.399 [2024-11-20 15:40:05.082976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.399 [2024-11-20 15:40:05.094960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:01.399 [2024-11-20 15:40:05.094987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:01.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2380501) - No such process 00:31:01.399 15:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2380501 00:31:01.399 15:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.399 15:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.399 15:40:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:01.399 15:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.399 15:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:01.399 15:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.399 15:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:01.399 delay0 00:31:01.399 15:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.399 15:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:01.399 15:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.399 15:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:01.399 15:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.399 15:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:01.399 [2024-11-20 15:40:05.199999] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:09.509 Initializing NVMe Controllers 00:31:09.509 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:09.509 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:09.509 Initialization complete. Launching workers. 00:31:09.509 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 239, failed: 27838 00:31:09.509 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 27958, failed to submit 119 00:31:09.509 success 27873, unsuccessful 85, failed 0 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:09.509 rmmod nvme_tcp 00:31:09.509 rmmod nvme_fabrics 00:31:09.509 rmmod nvme_keyring 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2378801 ']' 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 
2378801 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2378801 ']' 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2378801 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2378801 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2378801' 00:31:09.509 killing process with pid 2378801 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2378801 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2378801 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:09.509 15:40:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.509 15:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.887 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:10.887 00:31:10.887 real 0m32.364s 00:31:10.887 user 0m41.509s 00:31:10.887 sys 0m13.349s 00:31:10.887 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:10.887 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:10.887 ************************************ 00:31:10.887 END TEST nvmf_zcopy 00:31:10.887 ************************************ 00:31:10.887 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:10.887 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:10.887 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:10.887 15:40:14 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:10.887 ************************************ 00:31:10.887 START TEST nvmf_nmic 00:31:10.887 ************************************ 00:31:10.887 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:11.146 * Looking for test storage... 00:31:11.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@338 -- # local 'op=<' 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:11.146 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@366 -- # ver2[v]=2 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:11.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.147 --rc genhtml_branch_coverage=1 00:31:11.147 --rc genhtml_function_coverage=1 00:31:11.147 --rc genhtml_legend=1 00:31:11.147 --rc geninfo_all_blocks=1 00:31:11.147 --rc geninfo_unexecuted_blocks=1 00:31:11.147 00:31:11.147 ' 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:11.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.147 --rc genhtml_branch_coverage=1 00:31:11.147 --rc genhtml_function_coverage=1 00:31:11.147 --rc genhtml_legend=1 00:31:11.147 --rc geninfo_all_blocks=1 00:31:11.147 --rc geninfo_unexecuted_blocks=1 00:31:11.147 00:31:11.147 ' 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:11.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.147 --rc genhtml_branch_coverage=1 00:31:11.147 --rc genhtml_function_coverage=1 00:31:11.147 --rc genhtml_legend=1 00:31:11.147 --rc geninfo_all_blocks=1 00:31:11.147 --rc geninfo_unexecuted_blocks=1 00:31:11.147 00:31:11.147 ' 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:11.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.147 --rc genhtml_branch_coverage=1 00:31:11.147 --rc genhtml_function_coverage=1 00:31:11.147 --rc genhtml_legend=1 00:31:11.147 --rc geninfo_all_blocks=1 00:31:11.147 --rc geninfo_unexecuted_blocks=1 00:31:11.147 00:31:11.147 ' 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.147 15:40:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:11.147 15:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.716 15:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:17.716 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:17.716 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:17.716 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:17.716 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:17.716 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:17.716 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:17.716 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:17.716 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:17.716 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:17.716 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:17.716 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:17.716 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:17.717 15:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:17.717 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:17.717 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.717 15:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:17.717 Found net devices under 0000:86:00.0: cvl_0_0 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.717 15:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:17.717 Found net devices under 0000:86:00.1: cvl_0_1 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:17.717 15:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:17.717 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:17.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:17.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:31:17.718 00:31:17.718 --- 10.0.0.2 ping statistics --- 00:31:17.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.718 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:17.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:17.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:31:17.718 00:31:17.718 --- 10.0.0.1 ping statistics --- 00:31:17.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.718 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2386013 
00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2386013 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2386013 ']' 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:17.718 15:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.718 [2024-11-20 15:40:20.981640] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:17.718 [2024-11-20 15:40:20.982600] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:31:17.718 [2024-11-20 15:40:20.982634] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.718 [2024-11-20 15:40:21.061816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:17.718 [2024-11-20 15:40:21.105766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:17.718 [2024-11-20 15:40:21.105804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:17.718 [2024-11-20 15:40:21.105812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:17.718 [2024-11-20 15:40:21.105818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:17.718 [2024-11-20 15:40:21.105824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:17.718 [2024-11-20 15:40:21.107399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.718 [2024-11-20 15:40:21.107510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:17.718 [2024-11-20 15:40:21.107642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.718 [2024-11-20 15:40:21.107644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:17.718 [2024-11-20 15:40:21.175052] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:17.718 [2024-11-20 15:40:21.175174] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:17.718 [2024-11-20 15:40:21.175830] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:17.718 [2024-11-20 15:40:21.176017] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:17.718 [2024-11-20 15:40:21.176105] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.718 [2024-11-20 15:40:21.244376] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.718 Malloc0 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.718 [2024-11-20 15:40:21.328554] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.718 15:40:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:17.718 test case1: single bdev can't be used in multiple subsystems 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.718 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.718 [2024-11-20 15:40:21.360015] 
bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:17.718 [2024-11-20 15:40:21.360035] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:17.718 [2024-11-20 15:40:21.360043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:17.719 request: 00:31:17.719 { 00:31:17.719 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:17.719 "namespace": { 00:31:17.719 "bdev_name": "Malloc0", 00:31:17.719 "no_auto_visible": false 00:31:17.719 }, 00:31:17.719 "method": "nvmf_subsystem_add_ns", 00:31:17.719 "req_id": 1 00:31:17.719 } 00:31:17.719 Got JSON-RPC error response 00:31:17.719 response: 00:31:17.719 { 00:31:17.719 "code": -32602, 00:31:17.719 "message": "Invalid parameters" 00:31:17.719 } 00:31:17.719 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:17.719 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:17.719 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:17.719 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:17.719 Adding namespace failed - expected result. 
00:31:17.719 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:17.719 test case2: host connect to nvmf target in multiple paths 00:31:17.719 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:17.719 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.719 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:17.719 [2024-11-20 15:40:21.372106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:17.719 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.719 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:17.977 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:18.234 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:18.234 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:18.234 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:18.234 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:18.234 15:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:20.130 15:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:20.130 15:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:20.130 15:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:20.130 15:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:20.130 15:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:20.130 15:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:20.130 15:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:20.130 [global] 00:31:20.130 thread=1 00:31:20.130 invalidate=1 00:31:20.130 rw=write 00:31:20.130 time_based=1 00:31:20.130 runtime=1 00:31:20.130 ioengine=libaio 00:31:20.130 direct=1 00:31:20.130 bs=4096 00:31:20.130 iodepth=1 00:31:20.130 norandommap=0 00:31:20.130 numjobs=1 00:31:20.130 00:31:20.130 verify_dump=1 00:31:20.130 verify_backlog=512 00:31:20.130 verify_state_save=0 00:31:20.130 do_verify=1 00:31:20.130 verify=crc32c-intel 00:31:20.130 [job0] 00:31:20.130 filename=/dev/nvme0n1 00:31:20.130 Could not set queue depth (nvme0n1) 00:31:20.387 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:20.387 fio-3.35 00:31:20.387 Starting 1 thread 00:31:21.760 00:31:21.760 job0: (groupid=0, jobs=1): err= 0: pid=2386754: Wed Nov 20 
15:40:25 2024 00:31:21.760 read: IOPS=962, BW=3849KiB/s (3941kB/s)(3872KiB/1006msec) 00:31:21.760 slat (nsec): min=7190, max=38573, avg=8640.77, stdev=2569.11 00:31:21.760 clat (usec): min=186, max=41014, avg=834.27, stdev=5034.09 00:31:21.760 lat (usec): min=194, max=41037, avg=842.91, stdev=5035.78 00:31:21.760 clat percentiles (usec): 00:31:21.760 | 1.00th=[ 190], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 196], 00:31:21.760 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 202], 00:31:21.760 | 70.00th=[ 206], 80.00th=[ 208], 90.00th=[ 215], 95.00th=[ 241], 00:31:21.760 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:21.760 | 99.99th=[41157] 00:31:21.760 write: IOPS=1017, BW=4072KiB/s (4169kB/s)(4096KiB/1006msec); 0 zone resets 00:31:21.760 slat (usec): min=10, max=26372, avg=37.53, stdev=823.77 00:31:21.760 clat (usec): min=122, max=328, avg=140.77, stdev=12.07 00:31:21.760 lat (usec): min=140, max=26581, avg=178.29, stdev=825.99 00:31:21.760 clat percentiles (usec): 00:31:21.760 | 1.00th=[ 131], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:31:21.760 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 139], 60.00th=[ 141], 00:31:21.760 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 153], 00:31:21.760 | 99.00th=[ 190], 99.50th=[ 200], 99.90th=[ 310], 99.95th=[ 330], 00:31:21.760 | 99.99th=[ 330] 00:31:21.760 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:31:21.760 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:21.760 lat (usec) : 250=98.69%, 500=0.55% 00:31:21.760 lat (msec) : 50=0.75% 00:31:21.760 cpu : usr=1.49%, sys=3.38%, ctx=1994, majf=0, minf=1 00:31:21.760 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.760 issued rwts: total=968,1024,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:31:21.760 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:21.760 00:31:21.760 Run status group 0 (all jobs): 00:31:21.760 READ: bw=3849KiB/s (3941kB/s), 3849KiB/s-3849KiB/s (3941kB/s-3941kB/s), io=3872KiB (3965kB), run=1006-1006msec 00:31:21.760 WRITE: bw=4072KiB/s (4169kB/s), 4072KiB/s-4072KiB/s (4169kB/s-4169kB/s), io=4096KiB (4194kB), run=1006-1006msec 00:31:21.760 00:31:21.760 Disk stats (read/write): 00:31:21.760 nvme0n1: ios=991/1024, merge=0/0, ticks=1652/128, in_queue=1780, util=98.60% 00:31:21.760 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:21.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:21.760 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:21.760 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:21.760 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:21.760 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:21.760 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:21.760 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:21.760 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:21.760 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:21.760 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:21.760 15:40:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:21.760 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:21.760 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:21.760 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:21.760 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:21.760 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:21.760 rmmod nvme_tcp 00:31:21.760 rmmod nvme_fabrics 00:31:21.760 rmmod nvme_keyring 00:31:22.019 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:22.019 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:22.019 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:22.019 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2386013 ']' 00:31:22.019 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2386013 00:31:22.019 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2386013 ']' 00:31:22.019 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2386013 00:31:22.019 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:22.019 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:22.019 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2386013 
00:31:22.019 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:22.019 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:22.019 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2386013' 00:31:22.019 killing process with pid 2386013 00:31:22.019 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2386013 00:31:22.019 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2386013 00:31:22.019 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:22.019 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:22.019 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:22.019 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:22.277 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:22.277 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:22.277 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:22.277 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:22.277 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:22.277 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.277 15:40:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:22.277 15:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.179 15:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:24.179 00:31:24.179 real 0m13.231s 00:31:24.179 user 0m24.319s 00:31:24.179 sys 0m6.249s 00:31:24.179 15:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:24.179 15:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:24.179 ************************************ 00:31:24.179 END TEST nvmf_nmic 00:31:24.179 ************************************ 00:31:24.179 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:24.179 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:24.179 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:24.179 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:24.179 ************************************ 00:31:24.179 START TEST nvmf_fio_target 00:31:24.179 ************************************ 00:31:24.179 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:24.439 * Looking for test storage... 
00:31:24.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:24.439 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:24.440 
15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:24.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.440 --rc genhtml_branch_coverage=1 00:31:24.440 --rc genhtml_function_coverage=1 00:31:24.440 --rc genhtml_legend=1 00:31:24.440 --rc geninfo_all_blocks=1 00:31:24.440 --rc geninfo_unexecuted_blocks=1 00:31:24.440 00:31:24.440 ' 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:24.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.440 --rc genhtml_branch_coverage=1 00:31:24.440 --rc genhtml_function_coverage=1 00:31:24.440 --rc genhtml_legend=1 00:31:24.440 --rc geninfo_all_blocks=1 00:31:24.440 --rc geninfo_unexecuted_blocks=1 00:31:24.440 00:31:24.440 ' 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:24.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.440 --rc genhtml_branch_coverage=1 00:31:24.440 --rc genhtml_function_coverage=1 00:31:24.440 --rc genhtml_legend=1 00:31:24.440 --rc geninfo_all_blocks=1 00:31:24.440 --rc geninfo_unexecuted_blocks=1 00:31:24.440 00:31:24.440 ' 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:24.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.440 --rc genhtml_branch_coverage=1 00:31:24.440 --rc genhtml_function_coverage=1 00:31:24.440 --rc genhtml_legend=1 00:31:24.440 --rc geninfo_all_blocks=1 
00:31:24.440 --rc geninfo_unexecuted_blocks=1 00:31:24.440 00:31:24.440 ' 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:24.440 
15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.440 15:40:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.440 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:24.441 
15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:24.441 15:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:24.441 15:40:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:31.013 15:40:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:31.013 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:31.014 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:31.014 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.014 
15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:31.014 Found net 
devices under 0000:86:00.0: cvl_0_0 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:31.014 Found net devices under 0000:86:00.1: cvl_0_1 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:31.014 15:40:33 
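The discovery phase traced above builds per-NIC-family arrays (`e810`, `x722`, `mlx`) from a vendor:device cache, then resolves each PCI address to its net device. A minimal standalone sketch of that bucketing, using a hypothetical `pci_bus_cache` populated with the two E810 addresses seen in this log (the real script fills the cache by scanning the PCI bus):

```shell
#!/usr/bin/env bash
# Sketch of the e810/x722/mlx bucketing done by nvmf/common.sh.
# pci_bus_cache contents here are hypothetical, hard-coded to match
# the two devices this log discovered; the real script scans sysfs.
intel=0x8086 mellanox=0x15b3
declare -A pci_bus_cache=(
  ["$intel:0x159b"]="0000:86:00.0 0000:86:00.1"  # assumed E810-C addresses
  ["$mellanox:0x1017"]=""                        # no ConnectX-5 present
)
e810=() mlx=()
# Unquoted expansion deliberately word-splits the cached address list.
e810+=(${pci_bus_cache["$intel:0x159b"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
pci_devs=("${e810[@]}")
for pci in "${pci_devs[@]}"; do
  echo "Found $pci (vendor $intel, device 0x159b)"
done
```

With the assumed cache this prints one "Found" line per E810 port, mirroring the two discovery lines in the log above.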
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:31.014 15:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:31.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:31.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:31:31.014 00:31:31.014 --- 10.0.0.2 ping statistics --- 00:31:31.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.014 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:31.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:31.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:31:31.014 00:31:31.014 --- 10.0.0.1 ping statistics --- 00:31:31.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.014 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:31.014 15:40:34 
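The `nvmf_tcp_init` sequence above moves one port (`cvl_0_0`) into a dedicated namespace while its link partner (`cvl_0_1`) stays in the root namespace, so target and initiator traffic actually crosses the physical cable, and the cross-namespace pings verify reachability before the target starts. A dry-run sketch of that plumbing, assuming the same interface/namespace names and 10.0.0.0/24 addressing as the log (`run` echoes instead of executing, since the real commands need root and the `cvl_*` NICs):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init.
# Names mirror this log; run() prints the command instead of executing
# it, because the real commands require root and real hardware.
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

run() { echo "+ $*"; }   # replace the body with "$@" to really execute

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"               # target port into ns
run ip addr add "$INI_IP/24" dev "$INI_IF"          # initiator stays in root ns
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"                             # initiator -> target
run ip netns exec "$NS" ping -c 1 "$INI_IP"         # target -> initiator
```

The trailing pings correspond to the two PING blocks in the log; only after both succeed does the script return 0 and launch `nvmf_tgt` inside the namespace via `ip netns exec`.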
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2390380 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2390380 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2390380 ']' 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.014 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:31.015 15:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:31.015 [2024-11-20 15:40:34.192310] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:31.015 [2024-11-20 15:40:34.193309] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:31:31.015 [2024-11-20 15:40:34.193349] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:31.015 [2024-11-20 15:40:34.273301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:31.015 [2024-11-20 15:40:34.317555] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:31.015 [2024-11-20 15:40:34.317595] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:31.015 [2024-11-20 15:40:34.317606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:31.015 [2024-11-20 15:40:34.317613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:31.015 [2024-11-20 15:40:34.317618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:31.015 [2024-11-20 15:40:34.319228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.015 [2024-11-20 15:40:34.319336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:31.015 [2024-11-20 15:40:34.321965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:31.015 [2024-11-20 15:40:34.321969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.015 [2024-11-20 15:40:34.388932] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:31.015 [2024-11-20 15:40:34.389374] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:31.015 [2024-11-20 15:40:34.389532] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:31.015 [2024-11-20 15:40:34.389691] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:31.015 [2024-11-20 15:40:34.389788] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:31.274 15:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:31.274 15:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:31.274 15:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:31.274 15:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:31.274 15:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:31.274 15:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:31.274 15:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:31.532 [2024-11-20 15:40:35.246696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:31.532 15:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:31.790 15:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:31.790 15:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:31:32.049 15:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:32.049 15:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:32.049 15:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:32.049 15:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:32.307 15:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:32.307 15:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:32.566 15:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:32.824 15:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:32.824 15:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:32.824 15:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:32.824 15:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:33.083 15:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:31:33.083 15:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:33.342 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:33.600 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:33.600 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:33.600 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:33.600 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:33.857 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:34.115 [2024-11-20 15:40:37.862607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:34.115 15:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:34.373 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:34.631 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:34.889 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:34.889 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:34.889 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:34.889 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:34.889 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:34.889 15:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:31:36.795 15:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:36.795 15:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:36.795 15:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:36.795 15:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:31:36.795 15:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:36.795 15:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:31:36.795 15:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:36.795 [global] 00:31:36.795 thread=1 00:31:36.795 invalidate=1 00:31:36.795 rw=write 00:31:36.795 time_based=1 00:31:36.795 runtime=1 00:31:36.795 ioengine=libaio 00:31:36.795 direct=1 00:31:36.795 bs=4096 00:31:36.795 iodepth=1 00:31:36.795 norandommap=0 00:31:36.795 numjobs=1 00:31:36.795 00:31:36.795 verify_dump=1 00:31:36.795 verify_backlog=512 00:31:36.795 verify_state_save=0 00:31:36.795 do_verify=1 00:31:36.795 verify=crc32c-intel 00:31:36.795 [job0] 00:31:36.795 filename=/dev/nvme0n1 00:31:36.795 [job1] 00:31:36.795 filename=/dev/nvme0n2 00:31:36.795 [job2] 00:31:36.795 filename=/dev/nvme0n3 00:31:36.795 [job3] 00:31:36.795 filename=/dev/nvme0n4 00:31:37.128 Could not set queue depth (nvme0n1) 00:31:37.128 Could not set queue depth (nvme0n2) 00:31:37.128 Could not set queue depth (nvme0n3) 00:31:37.128 Could not set queue depth (nvme0n4) 00:31:37.128 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:37.128 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:37.128 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:37.128 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:37.128 fio-3.35 00:31:37.128 Starting 4 threads 00:31:38.516 00:31:38.516 job0: (groupid=0, jobs=1): err= 0: pid=2391711: Wed Nov 20 15:40:42 2024 00:31:38.516 read: IOPS=121, BW=487KiB/s (499kB/s)(496KiB/1018msec) 00:31:38.516 slat (nsec): min=6370, max=24078, avg=9613.53, stdev=5472.55 00:31:38.516 clat (usec): min=193, max=41980, avg=7454.58, stdev=15629.07 00:31:38.516 lat (usec): min=201, 
max=42002, avg=7464.19, stdev=15632.13 00:31:38.516 clat percentiles (usec): 00:31:38.516 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 208], 00:31:38.516 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 233], 00:31:38.516 | 70.00th=[ 245], 80.00th=[ 318], 90.00th=[41157], 95.00th=[41157], 00:31:38.516 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:38.516 | 99.99th=[42206] 00:31:38.516 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:31:38.516 slat (nsec): min=9337, max=40920, avg=10560.62, stdev=1851.56 00:31:38.516 clat (usec): min=141, max=339, avg=167.11, stdev=14.17 00:31:38.516 lat (usec): min=151, max=380, avg=177.67, stdev=15.04 00:31:38.516 clat percentiles (usec): 00:31:38.516 | 1.00th=[ 149], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:31:38.516 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:31:38.516 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 190], 00:31:38.516 | 99.00th=[ 206], 99.50th=[ 239], 99.90th=[ 338], 99.95th=[ 338], 00:31:38.516 | 99.99th=[ 338] 00:31:38.516 bw ( KiB/s): min= 4096, max= 4096, per=25.55%, avg=4096.00, stdev= 0.00, samples=1 00:31:38.516 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:38.516 lat (usec) : 250=94.18%, 500=2.36% 00:31:38.516 lat (msec) : 50=3.46% 00:31:38.516 cpu : usr=0.00%, sys=0.98%, ctx=636, majf=0, minf=1 00:31:38.516 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.516 issued rwts: total=124,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.516 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:38.516 job1: (groupid=0, jobs=1): err= 0: pid=2391713: Wed Nov 20 15:40:42 2024 00:31:38.516 read: IOPS=2302, BW=9211KiB/s (9432kB/s)(9220KiB/1001msec) 
00:31:38.516 slat (nsec): min=6043, max=25869, avg=6992.38, stdev=963.58 00:31:38.516 clat (usec): min=186, max=40770, avg=246.37, stdev=845.21 00:31:38.516 lat (usec): min=193, max=40777, avg=253.36, stdev=845.20 00:31:38.516 clat percentiles (usec): 00:31:38.516 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 198], 00:31:38.516 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 217], 60.00th=[ 245], 00:31:38.516 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 269], 00:31:38.516 | 99.00th=[ 383], 99.50th=[ 396], 99.90th=[ 416], 99.95th=[ 420], 00:31:38.516 | 99.99th=[40633] 00:31:38.516 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:31:38.516 slat (nsec): min=8912, max=39224, avg=9915.96, stdev=1061.79 00:31:38.516 clat (usec): min=118, max=397, avg=148.79, stdev=23.37 00:31:38.516 lat (usec): min=128, max=436, avg=158.70, stdev=23.60 00:31:38.516 clat percentiles (usec): 00:31:38.516 | 1.00th=[ 123], 5.00th=[ 126], 10.00th=[ 128], 20.00th=[ 130], 00:31:38.516 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 151], 00:31:38.516 | 70.00th=[ 165], 80.00th=[ 176], 90.00th=[ 180], 95.00th=[ 184], 00:31:38.516 | 99.00th=[ 204], 99.50th=[ 243], 99.90th=[ 253], 99.95th=[ 258], 00:31:38.516 | 99.99th=[ 396] 00:31:38.516 bw ( KiB/s): min= 9216, max= 9216, per=57.49%, avg=9216.00, stdev= 0.00, samples=1 00:31:38.516 iops : min= 2304, max= 2304, avg=2304.00, stdev= 0.00, samples=1 00:31:38.516 lat (usec) : 250=87.42%, 500=12.56% 00:31:38.516 lat (msec) : 50=0.02% 00:31:38.516 cpu : usr=2.70%, sys=3.90%, ctx=4865, majf=0, minf=1 00:31:38.516 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.516 issued rwts: total=2305,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.516 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:31:38.516 job2: (groupid=0, jobs=1): err= 0: pid=2391714: Wed Nov 20 15:40:42 2024 00:31:38.516 read: IOPS=45, BW=182KiB/s (186kB/s)(184KiB/1013msec) 00:31:38.516 slat (nsec): min=6775, max=24122, avg=14501.87, stdev=7597.03 00:31:38.516 clat (usec): min=200, max=41964, avg=19787.22, stdev=20622.64 00:31:38.516 lat (usec): min=207, max=41987, avg=19801.73, stdev=20629.64 00:31:38.516 clat percentiles (usec): 00:31:38.516 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 231], 00:31:38.516 | 30.00th=[ 260], 40.00th=[ 302], 50.00th=[ 326], 60.00th=[41157], 00:31:38.516 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:31:38.516 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:38.516 | 99.99th=[42206] 00:31:38.516 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:31:38.516 slat (nsec): min=9506, max=36386, avg=11108.75, stdev=2308.29 00:31:38.516 clat (usec): min=150, max=343, avg=185.89, stdev=23.91 00:31:38.516 lat (usec): min=161, max=379, avg=197.00, stdev=24.33 00:31:38.516 clat percentiles (usec): 00:31:38.516 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:31:38.516 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:31:38.516 | 70.00th=[ 192], 80.00th=[ 206], 90.00th=[ 221], 95.00th=[ 231], 00:31:38.516 | 99.00th=[ 253], 99.50th=[ 289], 99.90th=[ 343], 99.95th=[ 343], 00:31:38.516 | 99.99th=[ 343] 00:31:38.516 bw ( KiB/s): min= 4096, max= 4096, per=25.55%, avg=4096.00, stdev= 0.00, samples=1 00:31:38.516 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:38.516 lat (usec) : 250=92.65%, 500=3.41% 00:31:38.516 lat (msec) : 50=3.94% 00:31:38.516 cpu : usr=0.20%, sys=0.69%, ctx=558, majf=0, minf=1 00:31:38.516 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.516 issued rwts: total=46,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.516 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:38.516 job3: (groupid=0, jobs=1): err= 0: pid=2391715: Wed Nov 20 15:40:42 2024 00:31:38.516 read: IOPS=21, BW=86.1KiB/s (88.2kB/s)(88.0KiB/1022msec) 00:31:38.516 slat (nsec): min=10844, max=27799, avg=21910.95, stdev=2772.64 00:31:38.516 clat (usec): min=40791, max=41182, avg=40970.34, stdev=92.64 00:31:38.516 lat (usec): min=40813, max=41203, avg=40992.25, stdev=92.12 00:31:38.516 clat percentiles (usec): 00:31:38.516 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:31:38.516 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:38.516 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:38.516 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:38.516 | 99.99th=[41157] 00:31:38.516 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:31:38.516 slat (nsec): min=10376, max=41954, avg=13573.87, stdev=2929.98 00:31:38.516 clat (usec): min=166, max=345, avg=217.44, stdev=27.60 00:31:38.516 lat (usec): min=179, max=380, avg=231.02, stdev=27.58 00:31:38.516 clat percentiles (usec): 00:31:38.516 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 190], 00:31:38.516 | 30.00th=[ 198], 40.00th=[ 208], 50.00th=[ 219], 60.00th=[ 229], 00:31:38.516 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 247], 95.00th=[ 260], 00:31:38.516 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 347], 99.95th=[ 347], 00:31:38.516 | 99.99th=[ 347] 00:31:38.516 bw ( KiB/s): min= 4096, max= 4096, per=25.55%, avg=4096.00, stdev= 0.00, samples=1 00:31:38.516 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:38.516 lat (usec) : 250=88.76%, 500=7.12% 00:31:38.516 lat (msec) : 50=4.12% 00:31:38.516 cpu : usr=0.39%, sys=1.08%, ctx=534, majf=0, minf=1 00:31:38.516 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.516 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.516 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:38.516 00:31:38.516 Run status group 0 (all jobs): 00:31:38.516 READ: bw=9773KiB/s (10.0MB/s), 86.1KiB/s-9211KiB/s (88.2kB/s-9432kB/s), io=9988KiB (10.2MB), run=1001-1022msec 00:31:38.517 WRITE: bw=15.7MiB/s (16.4MB/s), 2004KiB/s-9.99MiB/s (2052kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1022msec 00:31:38.517 00:31:38.517 Disk stats (read/write): 00:31:38.517 nvme0n1: ios=164/512, merge=0/0, ticks=934/86, in_queue=1020, util=90.88% 00:31:38.517 nvme0n2: ios=2014/2048, merge=0/0, ticks=762/306, in_queue=1068, util=95.53% 00:31:38.517 nvme0n3: ios=42/512, merge=0/0, ticks=746/90, in_queue=836, util=88.96% 00:31:38.517 nvme0n4: ios=22/512, merge=0/0, ticks=902/113, in_queue=1015, util=90.97% 00:31:38.517 15:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:38.517 [global] 00:31:38.517 thread=1 00:31:38.517 invalidate=1 00:31:38.517 rw=randwrite 00:31:38.517 time_based=1 00:31:38.517 runtime=1 00:31:38.517 ioengine=libaio 00:31:38.517 direct=1 00:31:38.517 bs=4096 00:31:38.517 iodepth=1 00:31:38.517 norandommap=0 00:31:38.517 numjobs=1 00:31:38.517 00:31:38.517 verify_dump=1 00:31:38.517 verify_backlog=512 00:31:38.517 verify_state_save=0 00:31:38.517 do_verify=1 00:31:38.517 verify=crc32c-intel 00:31:38.517 [job0] 00:31:38.517 filename=/dev/nvme0n1 00:31:38.517 [job1] 00:31:38.517 filename=/dev/nvme0n2 00:31:38.517 [job2] 00:31:38.517 filename=/dev/nvme0n3 00:31:38.517 [job3] 00:31:38.517 filename=/dev/nvme0n4 00:31:38.517 Could not set queue 
depth (nvme0n1) 00:31:38.517 Could not set queue depth (nvme0n2) 00:31:38.517 Could not set queue depth (nvme0n3) 00:31:38.517 Could not set queue depth (nvme0n4) 00:31:38.775 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:38.775 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:38.775 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:38.775 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:38.775 fio-3.35 00:31:38.775 Starting 4 threads 00:31:40.155 00:31:40.155 job0: (groupid=0, jobs=1): err= 0: pid=2392093: Wed Nov 20 15:40:43 2024 00:31:40.155 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:31:40.155 slat (nsec): min=4997, max=70605, avg=8989.64, stdev=3791.52 00:31:40.155 clat (usec): min=192, max=41187, avg=400.90, stdev=2230.53 00:31:40.155 lat (usec): min=200, max=41196, avg=409.89, stdev=2230.49 00:31:40.155 clat percentiles (usec): 00:31:40.155 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 221], 00:31:40.155 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 251], 00:31:40.155 | 70.00th=[ 269], 80.00th=[ 318], 90.00th=[ 429], 95.00th=[ 486], 00:31:40.155 | 99.00th=[ 523], 99.50th=[ 529], 99.90th=[40633], 99.95th=[41157], 00:31:40.155 | 99.99th=[41157] 00:31:40.155 write: IOPS=1811, BW=7245KiB/s (7419kB/s)(7252KiB/1001msec); 0 zone resets 00:31:40.155 slat (nsec): min=9265, max=47770, avg=11884.96, stdev=3232.29 00:31:40.155 clat (usec): min=128, max=462, avg=186.88, stdev=33.60 00:31:40.155 lat (usec): min=146, max=475, avg=198.77, stdev=34.35 00:31:40.155 clat percentiles (usec): 00:31:40.155 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 163], 00:31:40.155 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 184], 00:31:40.155 | 70.00th=[ 194], 
80.00th=[ 206], 90.00th=[ 237], 95.00th=[ 262], 00:31:40.155 | 99.00th=[ 302], 99.50th=[ 306], 99.90th=[ 367], 99.95th=[ 461], 00:31:40.155 | 99.99th=[ 461] 00:31:40.155 bw ( KiB/s): min= 8192, max= 8192, per=24.45%, avg=8192.00, stdev= 0.00, samples=1 00:31:40.155 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:40.155 lat (usec) : 250=77.75%, 500=20.87%, 750=1.22% 00:31:40.155 lat (msec) : 50=0.15% 00:31:40.155 cpu : usr=2.10%, sys=5.40%, ctx=3352, majf=0, minf=1 00:31:40.155 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.155 issued rwts: total=1536,1813,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.155 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:40.155 job1: (groupid=0, jobs=1): err= 0: pid=2392094: Wed Nov 20 15:40:43 2024 00:31:40.155 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:31:40.156 slat (nsec): min=6608, max=42991, avg=7938.06, stdev=1703.72 00:31:40.156 clat (usec): min=201, max=41276, avg=384.64, stdev=2080.48 00:31:40.156 lat (usec): min=208, max=41284, avg=392.58, stdev=2080.51 00:31:40.156 clat percentiles (usec): 00:31:40.156 | 1.00th=[ 225], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 251], 00:31:40.156 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 285], 00:31:40.156 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 334], 00:31:40.156 | 99.00th=[ 486], 99.50th=[ 515], 99.90th=[41157], 99.95th=[41157], 00:31:40.156 | 99.99th=[41157] 00:31:40.156 write: IOPS=2042, BW=8172KiB/s (8368kB/s)(8180KiB/1001msec); 0 zone resets 00:31:40.156 slat (nsec): min=5846, max=39324, avg=10578.92, stdev=1570.09 00:31:40.156 clat (usec): min=130, max=364, avg=179.92, stdev=22.36 00:31:40.156 lat (usec): min=141, max=402, avg=190.50, stdev=22.59 00:31:40.156 clat percentiles (usec): 
00:31:40.156 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 167], 00:31:40.156 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:31:40.156 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 202], 95.00th=[ 223], 00:31:40.156 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 314], 99.95th=[ 363], 00:31:40.156 | 99.99th=[ 367] 00:31:40.156 bw ( KiB/s): min= 8192, max= 8192, per=24.45%, avg=8192.00, stdev= 0.00, samples=1 00:31:40.156 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:40.156 lat (usec) : 250=62.78%, 500=37.00%, 750=0.11% 00:31:40.156 lat (msec) : 50=0.11% 00:31:40.156 cpu : usr=2.00%, sys=3.20%, ctx=3583, majf=0, minf=1 00:31:40.156 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.156 issued rwts: total=1536,2045,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.156 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:40.156 job2: (groupid=0, jobs=1): err= 0: pid=2392095: Wed Nov 20 15:40:43 2024 00:31:40.156 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:31:40.156 slat (nsec): min=6674, max=29817, avg=8419.24, stdev=2284.51 00:31:40.156 clat (usec): min=199, max=665, avg=258.40, stdev=37.98 00:31:40.156 lat (usec): min=206, max=673, avg=266.82, stdev=38.09 00:31:40.156 clat percentiles (usec): 00:31:40.156 | 1.00th=[ 215], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 235], 00:31:40.156 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 255], 00:31:40.156 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 297], 95.00th=[ 334], 00:31:40.156 | 99.00th=[ 404], 99.50th=[ 416], 99.90th=[ 619], 99.95th=[ 627], 00:31:40.156 | 99.99th=[ 668] 00:31:40.156 write: IOPS=2285, BW=9143KiB/s (9362kB/s)(9152KiB/1001msec); 0 zone resets 00:31:40.156 slat (nsec): min=5516, max=65851, avg=11401.57, stdev=3862.89 
00:31:40.156 clat (usec): min=124, max=409, avg=182.02, stdev=24.20 00:31:40.156 lat (usec): min=150, max=434, avg=193.42, stdev=24.30 00:31:40.156 clat percentiles (usec): 00:31:40.156 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:31:40.156 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:31:40.156 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 212], 95.00th=[ 227], 00:31:40.156 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 351], 99.95th=[ 400], 00:31:40.156 | 99.99th=[ 408] 00:31:40.156 bw ( KiB/s): min= 8752, max= 8752, per=26.13%, avg=8752.00, stdev= 0.00, samples=1 00:31:40.156 iops : min= 2188, max= 2188, avg=2188.00, stdev= 0.00, samples=1 00:31:40.156 lat (usec) : 250=76.31%, 500=23.57%, 750=0.12% 00:31:40.156 cpu : usr=2.20%, sys=4.40%, ctx=4337, majf=0, minf=1 00:31:40.156 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.156 issued rwts: total=2048,2288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.156 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:40.156 job3: (groupid=0, jobs=1): err= 0: pid=2392097: Wed Nov 20 15:40:43 2024 00:31:40.156 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:31:40.156 slat (nsec): min=5143, max=23760, avg=9202.58, stdev=1320.37 00:31:40.156 clat (usec): min=202, max=41210, avg=255.16, stdev=912.55 00:31:40.156 lat (usec): min=211, max=41217, avg=264.36, stdev=912.50 00:31:40.156 clat percentiles (usec): 00:31:40.156 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 219], 00:31:40.156 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 231], 00:31:40.156 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 262], 00:31:40.156 | 99.00th=[ 330], 99.50th=[ 388], 99.90th=[ 1418], 99.95th=[ 5145], 00:31:40.156 | 99.99th=[41157] 00:31:40.156 
write: IOPS=2234, BW=8939KiB/s (9154kB/s)(8948KiB/1001msec); 0 zone resets 00:31:40.156 slat (nsec): min=5312, max=38043, avg=12836.87, stdev=2659.36 00:31:40.156 clat (usec): min=148, max=433, avg=185.67, stdev=23.77 00:31:40.156 lat (usec): min=154, max=446, avg=198.51, stdev=23.82 00:31:40.156 clat percentiles (usec): 00:31:40.156 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 167], 00:31:40.156 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:31:40.156 | 70.00th=[ 192], 80.00th=[ 202], 90.00th=[ 217], 95.00th=[ 233], 00:31:40.156 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 322], 99.95th=[ 355], 00:31:40.156 | 99.99th=[ 433] 00:31:40.156 bw ( KiB/s): min= 9536, max= 9536, per=28.47%, avg=9536.00, stdev= 0.00, samples=1 00:31:40.156 iops : min= 2384, max= 2384, avg=2384.00, stdev= 0.00, samples=1 00:31:40.156 lat (usec) : 250=93.91%, 500=6.00%, 750=0.02% 00:31:40.156 lat (msec) : 2=0.02%, 10=0.02%, 50=0.02% 00:31:40.156 cpu : usr=4.30%, sys=6.70%, ctx=4286, majf=0, minf=1 00:31:40.156 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.156 issued rwts: total=2048,2237,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.156 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:40.156 00:31:40.156 Run status group 0 (all jobs): 00:31:40.156 READ: bw=28.0MiB/s (29.3MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:31:40.156 WRITE: bw=32.7MiB/s (34.3MB/s), 7245KiB/s-9143KiB/s (7419kB/s-9362kB/s), io=32.7MiB (34.3MB), run=1001-1001msec 00:31:40.156 00:31:40.156 Disk stats (read/write): 00:31:40.156 nvme0n1: ios=1421/1536, merge=0/0, ticks=1247/262, in_queue=1509, util=96.79% 00:31:40.156 nvme0n2: ios=1411/1536, merge=0/0, ticks=1533/264, in_queue=1797, util=98.88% 00:31:40.156 nvme0n3: 
ios=1694/2048, merge=0/0, ticks=905/354, in_queue=1259, util=99.90% 00:31:40.156 nvme0n4: ios=1861/2048, merge=0/0, ticks=1124/360, in_queue=1484, util=96.96% 00:31:40.156 15:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:40.156 [global] 00:31:40.156 thread=1 00:31:40.156 invalidate=1 00:31:40.156 rw=write 00:31:40.156 time_based=1 00:31:40.156 runtime=1 00:31:40.156 ioengine=libaio 00:31:40.156 direct=1 00:31:40.156 bs=4096 00:31:40.156 iodepth=128 00:31:40.156 norandommap=0 00:31:40.156 numjobs=1 00:31:40.156 00:31:40.156 verify_dump=1 00:31:40.156 verify_backlog=512 00:31:40.156 verify_state_save=0 00:31:40.156 do_verify=1 00:31:40.156 verify=crc32c-intel 00:31:40.156 [job0] 00:31:40.156 filename=/dev/nvme0n1 00:31:40.156 [job1] 00:31:40.156 filename=/dev/nvme0n2 00:31:40.156 [job2] 00:31:40.156 filename=/dev/nvme0n3 00:31:40.156 [job3] 00:31:40.156 filename=/dev/nvme0n4 00:31:40.156 Could not set queue depth (nvme0n1) 00:31:40.156 Could not set queue depth (nvme0n2) 00:31:40.156 Could not set queue depth (nvme0n3) 00:31:40.156 Could not set queue depth (nvme0n4) 00:31:40.413 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:40.413 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:40.413 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:40.413 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:40.413 fio-3.35 00:31:40.413 Starting 4 threads 00:31:41.783 00:31:41.783 job0: (groupid=0, jobs=1): err= 0: pid=2392464: Wed Nov 20 15:40:45 2024 00:31:41.783 read: IOPS=4587, BW=17.9MiB/s (18.8MB/s)(18.7MiB/1045msec) 00:31:41.783 slat (nsec): min=1061, max=21903k, 
avg=107727.97, stdev=759667.00 00:31:41.783 clat (usec): min=3804, max=59339, avg=15493.15, stdev=10832.68 00:31:41.783 lat (usec): min=3806, max=68019, avg=15600.88, stdev=10875.76 00:31:41.783 clat percentiles (usec): 00:31:41.783 | 1.00th=[ 4555], 5.00th=[ 5276], 10.00th=[ 5604], 20.00th=[ 9372], 00:31:41.783 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11207], 60.00th=[12518], 00:31:41.783 | 70.00th=[16581], 80.00th=[20579], 90.00th=[28967], 95.00th=[41681], 00:31:41.783 | 99.00th=[53216], 99.50th=[58983], 99.90th=[58983], 99.95th=[59507], 00:31:41.783 | 99.99th=[59507] 00:31:41.783 write: IOPS=4899, BW=19.1MiB/s (20.1MB/s)(20.0MiB/1045msec); 0 zone resets 00:31:41.783 slat (nsec): min=1848, max=19551k, avg=89230.38, stdev=612761.59 00:31:41.783 clat (usec): min=2762, max=52470, avg=11341.57, stdev=5433.95 00:31:41.783 lat (usec): min=2768, max=52481, avg=11430.80, stdev=5488.02 00:31:41.783 clat percentiles (usec): 00:31:41.783 | 1.00th=[ 3490], 5.00th=[ 5473], 10.00th=[ 5604], 20.00th=[ 6456], 00:31:41.783 | 30.00th=[ 9110], 40.00th=[10290], 50.00th=[10683], 60.00th=[11469], 00:31:41.783 | 70.00th=[12256], 80.00th=[14222], 90.00th=[15926], 95.00th=[21103], 00:31:41.783 | 99.00th=[36439], 99.50th=[40109], 99.90th=[52691], 99.95th=[52691], 00:31:41.783 | 99.99th=[52691] 00:31:41.783 bw ( KiB/s): min=16384, max=24576, per=29.46%, avg=20480.00, stdev=5792.62, samples=2 00:31:41.783 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:31:41.783 lat (msec) : 4=1.08%, 10=30.27%, 20=55.09%, 50=11.98%, 100=1.57% 00:31:41.783 cpu : usr=3.45%, sys=3.35%, ctx=429, majf=0, minf=1 00:31:41.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:41.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:41.783 issued rwts: total=4794,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.783 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:31:41.783 job1: (groupid=0, jobs=1): err= 0: pid=2392465: Wed Nov 20 15:40:45 2024 00:31:41.783 read: IOPS=4209, BW=16.4MiB/s (17.2MB/s)(17.1MiB/1043msec) 00:31:41.783 slat (nsec): min=1047, max=10189k, avg=79384.33, stdev=589422.51 00:31:41.783 clat (usec): min=2605, max=59515, avg=12742.84, stdev=7967.03 00:31:41.783 lat (usec): min=2611, max=59518, avg=12822.22, stdev=7983.09 00:31:41.783 clat percentiles (usec): 00:31:41.783 | 1.00th=[ 2966], 5.00th=[ 6849], 10.00th=[ 8455], 20.00th=[ 9503], 00:31:41.783 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11338], 60.00th=[11994], 00:31:41.783 | 70.00th=[12780], 80.00th=[13960], 90.00th=[16319], 95.00th=[19268], 00:31:41.783 | 99.00th=[59507], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:31:41.783 | 99.99th=[59507] 00:31:41.783 write: IOPS=4418, BW=17.3MiB/s (18.1MB/s)(18.0MiB/1043msec); 0 zone resets 00:31:41.783 slat (nsec): min=1909, max=40888k, avg=124379.60, stdev=1082077.45 00:31:41.783 clat (usec): min=993, max=58402, avg=14727.30, stdev=10028.28 00:31:41.783 lat (usec): min=998, max=70808, avg=14851.68, stdev=10142.51 00:31:41.783 clat percentiles (usec): 00:31:41.783 | 1.00th=[ 2057], 5.00th=[ 4490], 10.00th=[ 7111], 20.00th=[ 9372], 00:31:41.783 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11600], 60.00th=[12125], 00:31:41.783 | 70.00th=[15008], 80.00th=[16909], 90.00th=[30016], 95.00th=[38536], 00:31:41.783 | 99.00th=[51643], 99.50th=[54789], 99.90th=[56886], 99.95th=[56886], 00:31:41.783 | 99.99th=[58459] 00:31:41.783 bw ( KiB/s): min=12408, max=24456, per=26.51%, avg=18432.00, stdev=8519.22, samples=2 00:31:41.783 iops : min= 3102, max= 6114, avg=4608.00, stdev=2129.81, samples=2 00:31:41.783 lat (usec) : 1000=0.03% 00:31:41.783 lat (msec) : 2=0.36%, 4=2.33%, 10=26.17%, 20=59.54%, 50=9.40% 00:31:41.783 lat (msec) : 100=2.17% 00:31:41.783 cpu : usr=2.78%, sys=4.03%, ctx=387, majf=0, minf=1 00:31:41.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, 
>=64=99.3% 00:31:41.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:41.783 issued rwts: total=4390,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:41.783 job2: (groupid=0, jobs=1): err= 0: pid=2392466: Wed Nov 20 15:40:45 2024 00:31:41.783 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:31:41.783 slat (nsec): min=1149, max=12395k, avg=93662.64, stdev=695692.24 00:31:41.783 clat (usec): min=2980, max=33278, avg=12786.09, stdev=4123.06 00:31:41.783 lat (usec): min=2989, max=33283, avg=12879.76, stdev=4164.65 00:31:41.783 clat percentiles (usec): 00:31:41.783 | 1.00th=[ 3163], 5.00th=[ 7373], 10.00th=[ 8717], 20.00th=[10159], 00:31:41.783 | 30.00th=[10683], 40.00th=[11338], 50.00th=[11994], 60.00th=[13304], 00:31:41.783 | 70.00th=[13698], 80.00th=[15139], 90.00th=[17957], 95.00th=[20841], 00:31:41.783 | 99.00th=[27395], 99.50th=[28967], 99.90th=[32375], 99.95th=[33162], 00:31:41.783 | 99.99th=[33162] 00:31:41.783 write: IOPS=4331, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1002msec); 0 zone resets 00:31:41.783 slat (usec): min=2, max=43977, avg=128.86, stdev=1052.54 00:31:41.783 clat (usec): min=586, max=57137, avg=15136.55, stdev=9857.15 00:31:41.783 lat (usec): min=1166, max=81871, avg=15265.41, stdev=9975.59 00:31:41.783 clat percentiles (usec): 00:31:41.783 | 1.00th=[ 3228], 5.00th=[ 5669], 10.00th=[ 6783], 20.00th=[ 8848], 00:31:41.783 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11600], 60.00th=[12911], 00:31:41.783 | 70.00th=[13566], 80.00th=[21890], 90.00th=[31327], 95.00th=[35914], 00:31:41.783 | 99.00th=[51119], 99.50th=[54264], 99.90th=[56886], 99.95th=[56886], 00:31:41.783 | 99.99th=[56886] 00:31:41.783 bw ( KiB/s): min=13224, max=20480, per=24.24%, avg=16852.00, stdev=5130.77, samples=2 00:31:41.783 iops : min= 3306, max= 5120, avg=4213.00, stdev=1282.69, samples=2 
00:31:41.783 lat (usec) : 750=0.01% 00:31:41.783 lat (msec) : 2=0.19%, 4=1.77%, 10=20.46%, 20=64.01%, 50=12.81% 00:31:41.783 lat (msec) : 100=0.75% 00:31:41.783 cpu : usr=2.50%, sys=5.00%, ctx=351, majf=0, minf=1 00:31:41.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:41.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:41.783 issued rwts: total=4096,4340,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:41.784 job3: (groupid=0, jobs=1): err= 0: pid=2392467: Wed Nov 20 15:40:45 2024 00:31:41.784 read: IOPS=3740, BW=14.6MiB/s (15.3MB/s)(15.2MiB/1043msec) 00:31:41.784 slat (nsec): min=1095, max=12624k, avg=110014.26, stdev=753865.99 00:31:41.784 clat (usec): min=5424, max=64899, avg=15422.08, stdev=9152.24 00:31:41.784 lat (usec): min=5426, max=64911, avg=15532.09, stdev=9181.81 00:31:41.784 clat percentiles (usec): 00:31:41.784 | 1.00th=[ 7439], 5.00th=[ 8291], 10.00th=[ 9503], 20.00th=[10814], 00:31:41.784 | 30.00th=[11600], 40.00th=[12125], 50.00th=[13435], 60.00th=[13960], 00:31:41.784 | 70.00th=[15664], 80.00th=[18482], 90.00th=[21627], 95.00th=[25560], 00:31:41.784 | 99.00th=[64750], 99.50th=[64750], 99.90th=[64750], 99.95th=[64750], 00:31:41.784 | 99.99th=[64750] 00:31:41.784 write: IOPS=3927, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1043msec); 0 zone resets 00:31:41.784 slat (usec): min=2, max=40885, avg=132.56, stdev=1030.14 00:31:41.784 clat (usec): min=1746, max=67604, avg=15535.28, stdev=10086.29 00:31:41.784 lat (usec): min=1750, max=77313, avg=15667.84, stdev=10202.19 00:31:41.784 clat percentiles (usec): 00:31:41.784 | 1.00th=[ 6652], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[10028], 00:31:41.784 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11731], 60.00th=[12780], 00:31:41.784 | 70.00th=[13173], 80.00th=[18482], 90.00th=[26346], 95.00th=[42730], 
00:31:41.784 | 99.00th=[56361], 99.50th=[63701], 99.90th=[67634], 99.95th=[67634], 00:31:41.784 | 99.99th=[67634] 00:31:41.784 bw ( KiB/s): min=12288, max=20480, per=23.56%, avg=16384.00, stdev=5792.62, samples=2 00:31:41.784 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:31:41.784 lat (msec) : 2=0.01%, 4=0.13%, 10=17.28%, 20=66.31%, 50=13.99% 00:31:41.784 lat (msec) : 100=2.28% 00:31:41.784 cpu : usr=3.36%, sys=5.47%, ctx=347, majf=0, minf=1 00:31:41.784 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:41.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:41.784 issued rwts: total=3901,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:41.784 00:31:41.784 Run status group 0 (all jobs): 00:31:41.784 READ: bw=64.2MiB/s (67.3MB/s), 14.6MiB/s-17.9MiB/s (15.3MB/s-18.8MB/s), io=67.1MiB (70.4MB), run=1002-1045msec 00:31:41.784 WRITE: bw=67.9MiB/s (71.2MB/s), 15.3MiB/s-19.1MiB/s (16.1MB/s-20.1MB/s), io=71.0MiB (74.4MB), run=1002-1045msec 00:31:41.784 00:31:41.784 Disk stats (read/write): 00:31:41.784 nvme0n1: ios=4129/4229, merge=0/0, ticks=21680/19596, in_queue=41276, util=97.09% 00:31:41.784 nvme0n2: ios=3604/3824, merge=0/0, ticks=22108/29936, in_queue=52044, util=92.60% 00:31:41.784 nvme0n3: ios=3506/3584, merge=0/0, ticks=31156/42633, in_queue=73789, util=95.64% 00:31:41.784 nvme0n4: ios=3136/3584, merge=0/0, ticks=29414/43420, in_queue=72834, util=100.00% 00:31:41.784 15:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:41.784 [global] 00:31:41.784 thread=1 00:31:41.784 invalidate=1 00:31:41.784 rw=randwrite 00:31:41.784 time_based=1 00:31:41.784 runtime=1 00:31:41.784 
ioengine=libaio 00:31:41.784 direct=1 00:31:41.784 bs=4096 00:31:41.784 iodepth=128 00:31:41.784 norandommap=0 00:31:41.784 numjobs=1 00:31:41.784 00:31:41.784 verify_dump=1 00:31:41.784 verify_backlog=512 00:31:41.784 verify_state_save=0 00:31:41.784 do_verify=1 00:31:41.784 verify=crc32c-intel 00:31:41.784 [job0] 00:31:41.784 filename=/dev/nvme0n1 00:31:41.784 [job1] 00:31:41.784 filename=/dev/nvme0n2 00:31:41.784 [job2] 00:31:41.784 filename=/dev/nvme0n3 00:31:41.784 [job3] 00:31:41.784 filename=/dev/nvme0n4 00:31:41.784 Could not set queue depth (nvme0n1) 00:31:41.784 Could not set queue depth (nvme0n2) 00:31:41.784 Could not set queue depth (nvme0n3) 00:31:41.784 Could not set queue depth (nvme0n4) 00:31:41.784 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:41.784 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:41.784 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:41.784 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:41.784 fio-3.35 00:31:41.784 Starting 4 threads 00:31:43.155 00:31:43.155 job0: (groupid=0, jobs=1): err= 0: pid=2392835: Wed Nov 20 15:40:46 2024 00:31:43.155 read: IOPS=3769, BW=14.7MiB/s (15.4MB/s)(14.9MiB/1009msec) 00:31:43.155 slat (nsec): min=1200, max=19239k, avg=115964.48, stdev=815779.39 00:31:43.155 clat (usec): min=425, max=47947, avg=14723.59, stdev=7385.52 00:31:43.155 lat (usec): min=4120, max=47959, avg=14839.55, stdev=7434.59 00:31:43.155 clat percentiles (usec): 00:31:43.155 | 1.00th=[ 4228], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10290], 00:31:43.155 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:31:43.155 | 70.00th=[12911], 80.00th=[17433], 90.00th=[28967], 95.00th=[30278], 00:31:43.155 | 99.00th=[40109], 99.50th=[40109], 
99.90th=[40109], 99.95th=[47973], 00:31:43.155 | 99.99th=[47973] 00:31:43.155 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:31:43.155 slat (nsec): min=1860, max=21883k, avg=132920.01, stdev=1005726.08 00:31:43.155 clat (usec): min=3579, max=61964, avg=17451.44, stdev=9236.73 00:31:43.155 lat (usec): min=3582, max=61995, avg=17584.36, stdev=9310.31 00:31:43.155 clat percentiles (usec): 00:31:43.155 | 1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[10159], 00:31:43.155 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12256], 60.00th=[16909], 00:31:43.156 | 70.00th=[21627], 80.00th=[24249], 90.00th=[30802], 95.00th=[40109], 00:31:43.156 | 99.00th=[49546], 99.50th=[49546], 99.90th=[49546], 99.95th=[53740], 00:31:43.156 | 99.99th=[62129] 00:31:43.156 bw ( KiB/s): min=14704, max=18064, per=21.59%, avg=16384.00, stdev=2375.88, samples=2 00:31:43.156 iops : min= 3676, max= 4516, avg=4096.00, stdev=593.97, samples=2 00:31:43.156 lat (usec) : 500=0.01% 00:31:43.156 lat (msec) : 4=0.08%, 10=14.57%, 20=59.13%, 50=26.17%, 100=0.04% 00:31:43.156 cpu : usr=2.68%, sys=4.27%, ctx=227, majf=0, minf=1 00:31:43.156 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:43.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:43.156 issued rwts: total=3803,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:43.156 job1: (groupid=0, jobs=1): err= 0: pid=2392836: Wed Nov 20 15:40:46 2024 00:31:43.156 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:31:43.156 slat (nsec): min=1308, max=11858k, avg=87054.77, stdev=573260.53 00:31:43.156 clat (usec): min=3163, max=36600, avg=12248.79, stdev=3827.40 00:31:43.156 lat (usec): min=3168, max=36606, avg=12335.84, stdev=3860.29 00:31:43.156 clat percentiles (usec): 00:31:43.156 | 1.00th=[ 3949], 
5.00th=[ 7898], 10.00th=[ 8979], 20.00th=[ 9896], 00:31:43.156 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11338], 60.00th=[11994], 00:31:43.156 | 70.00th=[13173], 80.00th=[14353], 90.00th=[17433], 95.00th=[18744], 00:31:43.156 | 99.00th=[22152], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:31:43.156 | 99.99th=[36439] 00:31:43.156 write: IOPS=5179, BW=20.2MiB/s (21.2MB/s)(20.3MiB/1003msec); 0 zone resets 00:31:43.156 slat (nsec): min=1977, max=21246k, avg=89385.19, stdev=683014.38 00:31:43.156 clat (usec): min=422, max=40299, avg=12440.67, stdev=5281.87 00:31:43.156 lat (usec): min=429, max=40309, avg=12530.05, stdev=5330.96 00:31:43.156 clat percentiles (usec): 00:31:43.156 | 1.00th=[ 4113], 5.00th=[ 6456], 10.00th=[ 8586], 20.00th=[10028], 00:31:43.156 | 30.00th=[10290], 40.00th=[10552], 50.00th=[11076], 60.00th=[12125], 00:31:43.156 | 70.00th=[12649], 80.00th=[14353], 90.00th=[16188], 95.00th=[20841], 00:31:43.156 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36439], 99.95th=[39060], 00:31:43.156 | 99.99th=[40109] 00:31:43.156 bw ( KiB/s): min=20480, max=20480, per=26.98%, avg=20480.00, stdev= 0.00, samples=2 00:31:43.156 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:31:43.156 lat (usec) : 500=0.04%, 750=0.21% 00:31:43.156 lat (msec) : 2=0.03%, 4=0.83%, 10=19.27%, 20=75.20%, 50=4.41% 00:31:43.156 cpu : usr=3.89%, sys=5.29%, ctx=382, majf=0, minf=2 00:31:43.156 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:43.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:43.156 issued rwts: total=5120,5195,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:43.156 job2: (groupid=0, jobs=1): err= 0: pid=2392837: Wed Nov 20 15:40:46 2024 00:31:43.156 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:31:43.156 slat (nsec): 
min=1383, max=21254k, avg=113808.21, stdev=928983.57 00:31:43.156 clat (usec): min=3617, max=53625, avg=14395.11, stdev=6874.32 00:31:43.156 lat (usec): min=3624, max=53635, avg=14508.92, stdev=6944.70 00:31:43.156 clat percentiles (usec): 00:31:43.156 | 1.00th=[ 6783], 5.00th=[ 8848], 10.00th=[10028], 20.00th=[10683], 00:31:43.156 | 30.00th=[11076], 40.00th=[11994], 50.00th=[12649], 60.00th=[13173], 00:31:43.156 | 70.00th=[13698], 80.00th=[15401], 90.00th=[19792], 95.00th=[31851], 00:31:43.156 | 99.00th=[42730], 99.50th=[50594], 99.90th=[53740], 99.95th=[53740], 00:31:43.156 | 99.99th=[53740] 00:31:43.156 write: IOPS=4973, BW=19.4MiB/s (20.4MB/s)(19.6MiB/1008msec); 0 zone resets 00:31:43.156 slat (usec): min=2, max=13612, avg=86.10, stdev=623.13 00:31:43.156 clat (usec): min=2107, max=49331, avg=12133.01, stdev=4624.27 00:31:43.156 lat (usec): min=2117, max=49355, avg=12219.10, stdev=4671.49 00:31:43.156 clat percentiles (usec): 00:31:43.156 | 1.00th=[ 3916], 5.00th=[ 6980], 10.00th=[ 7570], 20.00th=[ 9634], 00:31:43.156 | 30.00th=[10683], 40.00th=[11338], 50.00th=[11731], 60.00th=[12125], 00:31:43.156 | 70.00th=[12911], 80.00th=[13435], 90.00th=[15664], 95.00th=[17433], 00:31:43.156 | 99.00th=[35390], 99.50th=[35914], 99.90th=[38011], 99.95th=[38011], 00:31:43.156 | 99.99th=[49546] 00:31:43.156 bw ( KiB/s): min=16432, max=22648, per=25.74%, avg=19540.00, stdev=4395.38, samples=2 00:31:43.156 iops : min= 4108, max= 5662, avg=4885.00, stdev=1098.84, samples=2 00:31:43.156 lat (msec) : 4=0.82%, 10=16.00%, 20=76.90%, 50=5.95%, 100=0.33% 00:31:43.156 cpu : usr=3.57%, sys=6.26%, ctx=355, majf=0, minf=1 00:31:43.156 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:43.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:43.156 issued rwts: total=4608,5013,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.156 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:31:43.156 job3: (groupid=0, jobs=1): err= 0: pid=2392838: Wed Nov 20 15:40:46 2024 00:31:43.156 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:31:43.156 slat (nsec): min=1105, max=11039k, avg=96512.86, stdev=669122.83 00:31:43.156 clat (usec): min=5625, max=37113, avg=13861.60, stdev=4382.82 00:31:43.156 lat (usec): min=5627, max=37116, avg=13958.11, stdev=4416.17 00:31:43.156 clat percentiles (usec): 00:31:43.156 | 1.00th=[ 5997], 5.00th=[ 8586], 10.00th=[ 9765], 20.00th=[10945], 00:31:43.156 | 30.00th=[11600], 40.00th=[12256], 50.00th=[13042], 60.00th=[13960], 00:31:43.156 | 70.00th=[14615], 80.00th=[16057], 90.00th=[19530], 95.00th=[23200], 00:31:43.156 | 99.00th=[26608], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:31:43.156 | 99.99th=[36963] 00:31:43.156 write: IOPS=4822, BW=18.8MiB/s (19.8MB/s)(18.9MiB/1004msec); 0 zone resets 00:31:43.156 slat (nsec): min=1921, max=12533k, avg=94556.08, stdev=620153.44 00:31:43.156 clat (usec): min=621, max=53168, avg=13083.69, stdev=4453.27 00:31:43.156 lat (usec): min=655, max=53171, avg=13178.25, stdev=4500.75 00:31:43.156 clat percentiles (usec): 00:31:43.156 | 1.00th=[ 5538], 5.00th=[ 7635], 10.00th=[ 8586], 20.00th=[10683], 00:31:43.156 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[12518], 00:31:43.156 | 70.00th=[13435], 80.00th=[16057], 90.00th=[18744], 95.00th=[20841], 00:31:43.156 | 99.00th=[28443], 99.50th=[28967], 99.90th=[42730], 99.95th=[42730], 00:31:43.156 | 99.99th=[53216] 00:31:43.156 bw ( KiB/s): min=17240, max=20480, per=24.85%, avg=18860.00, stdev=2291.03, samples=2 00:31:43.156 iops : min= 4310, max= 5120, avg=4715.00, stdev=572.76, samples=2 00:31:43.156 lat (usec) : 750=0.01% 00:31:43.156 lat (msec) : 4=0.10%, 10=13.65%, 20=78.23%, 50=8.00%, 100=0.01% 00:31:43.156 cpu : usr=4.19%, sys=4.59%, ctx=378, majf=0, minf=1 00:31:43.156 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:43.156 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:43.156 issued rwts: total=4608,4842,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:43.156 00:31:43.156 Run status group 0 (all jobs): 00:31:43.156 READ: bw=70.2MiB/s (73.6MB/s), 14.7MiB/s-19.9MiB/s (15.4MB/s-20.9MB/s), io=70.9MiB (74.3MB), run=1003-1009msec 00:31:43.156 WRITE: bw=74.1MiB/s (77.7MB/s), 15.9MiB/s-20.2MiB/s (16.6MB/s-21.2MB/s), io=74.8MiB (78.4MB), run=1003-1009msec 00:31:43.156 00:31:43.156 Disk stats (read/write): 00:31:43.156 nvme0n1: ios=3479/3584, merge=0/0, ticks=18863/21730, in_queue=40593, util=97.90% 00:31:43.156 nvme0n2: ios=4135/4608, merge=0/0, ticks=24424/31003, in_queue=55427, util=86.70% 00:31:43.156 nvme0n3: ios=3745/4096, merge=0/0, ticks=42462/34097, in_queue=76559, util=96.46% 00:31:43.156 nvme0n4: ios=4141/4181, merge=0/0, ticks=32696/30272, in_queue=62968, util=96.23% 00:31:43.156 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:43.156 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2393071 00:31:43.156 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:43.156 15:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:43.156 [global] 00:31:43.156 thread=1 00:31:43.156 invalidate=1 00:31:43.156 rw=read 00:31:43.156 time_based=1 00:31:43.156 runtime=10 00:31:43.156 ioengine=libaio 00:31:43.156 direct=1 00:31:43.156 bs=4096 00:31:43.156 iodepth=1 00:31:43.156 norandommap=1 00:31:43.156 numjobs=1 00:31:43.156 00:31:43.156 [job0] 00:31:43.156 filename=/dev/nvme0n1 00:31:43.156 [job1] 00:31:43.156 filename=/dev/nvme0n2 
00:31:43.156 [job2] 00:31:43.156 filename=/dev/nvme0n3 00:31:43.156 [job3] 00:31:43.156 filename=/dev/nvme0n4 00:31:43.156 Could not set queue depth (nvme0n1) 00:31:43.156 Could not set queue depth (nvme0n2) 00:31:43.156 Could not set queue depth (nvme0n3) 00:31:43.156 Could not set queue depth (nvme0n4) 00:31:43.413 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:43.413 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:43.413 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:43.413 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:43.413 fio-3.35 00:31:43.414 Starting 4 threads 00:31:46.688 15:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:46.688 15:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:46.688 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=47038464, buflen=4096 00:31:46.688 fio: pid=2393212, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:46.688 15:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:46.688 15:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:46.688 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=299008, buflen=4096 00:31:46.688 fio: pid=2393211, err=95/file:io_u.c:1889, func=io_u error, error=Operation not 
supported 00:31:46.688 15:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:46.688 15:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:46.688 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=30162944, buflen=4096 00:31:46.688 fio: pid=2393209, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:46.945 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1806336, buflen=4096 00:31:46.945 fio: pid=2393210, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:46.945 15:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:46.945 15:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:46.945 00:31:46.945 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2393209: Wed Nov 20 15:40:50 2024 00:31:46.945 read: IOPS=2321, BW=9283KiB/s (9506kB/s)(28.8MiB/3173msec) 00:31:46.945 slat (usec): min=6, max=16766, avg= 9.59, stdev=195.29 00:31:46.945 clat (usec): min=179, max=42262, avg=417.27, stdev=2769.73 00:31:46.945 lat (usec): min=186, max=59029, avg=426.86, stdev=2811.54 00:31:46.945 clat percentiles (usec): 00:31:46.945 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 223], 00:31:46.945 | 30.00th=[ 225], 40.00th=[ 227], 50.00th=[ 229], 60.00th=[ 231], 00:31:46.945 | 70.00th=[ 233], 80.00th=[ 235], 90.00th=[ 239], 95.00th=[ 243], 00:31:46.945 | 99.00th=[ 255], 99.50th=[ 408], 99.90th=[41157], 99.95th=[41681], 00:31:46.945 | 
99.99th=[42206] 00:31:46.945 bw ( KiB/s): min= 113, max=17032, per=42.39%, avg=9810.83, stdev=7935.45, samples=6 00:31:46.945 iops : min= 28, max= 4258, avg=2452.67, stdev=1983.92, samples=6 00:31:46.945 lat (usec) : 250=98.38%, 500=1.13%, 750=0.01% 00:31:46.945 lat (msec) : 50=0.46% 00:31:46.945 cpu : usr=0.54%, sys=2.14%, ctx=7366, majf=0, minf=1 00:31:46.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.945 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.945 issued rwts: total=7365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.945 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2393210: Wed Nov 20 15:40:50 2024 00:31:46.945 read: IOPS=132, BW=527KiB/s (540kB/s)(1764KiB/3347msec) 00:31:46.945 slat (usec): min=4, max=11457, avg=36.23, stdev=544.52 00:31:46.945 clat (usec): min=198, max=41993, avg=7526.20, stdev=15646.23 00:31:46.945 lat (usec): min=204, max=42016, avg=7562.44, stdev=15648.62 00:31:46.945 clat percentiles (usec): 00:31:46.945 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:31:46.945 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 231], 00:31:46.945 | 70.00th=[ 235], 80.00th=[ 255], 90.00th=[41157], 95.00th=[41157], 00:31:46.945 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:46.945 | 99.99th=[42206] 00:31:46.945 bw ( KiB/s): min= 96, max= 2864, per=2.47%, avg=572.33, stdev=1122.86, samples=6 00:31:46.945 iops : min= 24, max= 716, avg=143.00, stdev=280.75, samples=6 00:31:46.945 lat (usec) : 250=79.41%, 500=2.26%, 750=0.23% 00:31:46.945 lat (msec) : 50=17.87% 00:31:46.945 cpu : usr=0.09%, sys=0.09%, ctx=448, majf=0, minf=1 00:31:46.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:31:46.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.945 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.945 issued rwts: total=442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.945 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2393211: Wed Nov 20 15:40:50 2024 00:31:46.945 read: IOPS=25, BW=99.5KiB/s (102kB/s)(292KiB/2934msec) 00:31:46.945 slat (nsec): min=9223, max=32581, avg=22449.42, stdev=2791.10 00:31:46.945 clat (usec): min=284, max=41895, avg=39875.73, stdev=6682.82 00:31:46.945 lat (usec): min=295, max=41918, avg=39898.17, stdev=6682.91 00:31:46.945 clat percentiles (usec): 00:31:46.945 | 1.00th=[ 285], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:46.945 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:46.945 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:46.945 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:31:46.945 | 99.99th=[41681] 00:31:46.945 bw ( KiB/s): min= 96, max= 112, per=0.43%, avg=100.80, stdev= 7.16, samples=5 00:31:46.945 iops : min= 24, max= 28, avg=25.20, stdev= 1.79, samples=5 00:31:46.945 lat (usec) : 500=2.70% 00:31:46.945 lat (msec) : 50=95.95% 00:31:46.945 cpu : usr=0.10%, sys=0.00%, ctx=74, majf=0, minf=2 00:31:46.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.945 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.945 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.945 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2393212: Wed Nov 20 
15:40:50 2024 00:31:46.946 read: IOPS=4220, BW=16.5MiB/s (17.3MB/s)(44.9MiB/2721msec) 00:31:46.946 slat (nsec): min=6577, max=50436, avg=9294.88, stdev=1858.98 00:31:46.946 clat (usec): min=185, max=3025, avg=224.02, stdev=38.38 00:31:46.946 lat (usec): min=193, max=3033, avg=233.31, stdev=38.65 00:31:46.946 clat percentiles (usec): 00:31:46.946 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 212], 00:31:46.946 | 30.00th=[ 217], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 223], 00:31:46.946 | 70.00th=[ 227], 80.00th=[ 231], 90.00th=[ 237], 95.00th=[ 245], 00:31:46.946 | 99.00th=[ 416], 99.50th=[ 424], 99.90th=[ 445], 99.95th=[ 519], 00:31:46.946 | 99.99th=[ 570] 00:31:46.946 bw ( KiB/s): min=16864, max=17488, per=73.54%, avg=17017.60, stdev=264.21, samples=5 00:31:46.946 iops : min= 4216, max= 4372, avg=4254.40, stdev=66.05, samples=5 00:31:46.946 lat (usec) : 250=96.51%, 500=3.42%, 750=0.05% 00:31:46.946 lat (msec) : 4=0.01% 00:31:46.946 cpu : usr=1.69%, sys=6.99%, ctx=11485, majf=0, minf=2 00:31:46.946 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.946 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.946 issued rwts: total=11485,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.946 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.946 00:31:46.946 Run status group 0 (all jobs): 00:31:46.946 READ: bw=22.6MiB/s (23.7MB/s), 99.5KiB/s-16.5MiB/s (102kB/s-17.3MB/s), io=75.6MiB (79.3MB), run=2721-3347msec 00:31:46.946 00:31:46.946 Disk stats (read/write): 00:31:46.946 nvme0n1: ios=7361/0, merge=0/0, ticks=2941/0, in_queue=2941, util=95.22% 00:31:46.946 nvme0n2: ios=468/0, merge=0/0, ticks=3987/0, in_queue=3987, util=98.92% 00:31:46.946 nvme0n3: ios=71/0, merge=0/0, ticks=2831/0, in_queue=2831, util=96.52% 00:31:46.946 nvme0n4: ios=11086/0, merge=0/0, ticks=2369/0, in_queue=2369, util=96.45% 
00:31:47.202 15:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:47.202 15:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:47.459 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:47.459 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:47.459 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:47.459 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:47.715 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:47.715 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:47.972 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:47.972 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2393071 00:31:47.972 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:47.972 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 
00:31:47.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:47.972 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:47.972 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:47.972 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:47.972 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:48.229 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:48.229 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:48.229 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:48.229 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:48.229 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:48.229 nvmf hotplug test: fio failed as expected 00:31:48.229 15:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:48.229 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:48.229 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:48.229 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:31:48.229 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:48.229 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:48.229 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:48.229 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:48.229 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:48.229 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:48.229 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:48.229 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:48.229 rmmod nvme_tcp 00:31:48.488 rmmod nvme_fabrics 00:31:48.488 rmmod nvme_keyring 00:31:48.488 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:48.488 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:48.488 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:48.488 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2390380 ']' 00:31:48.488 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2390380 00:31:48.488 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2390380 ']' 00:31:48.488 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2390380 00:31:48.488 15:40:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:48.488 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:48.488 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2390380 00:31:48.488 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:48.488 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:48.488 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2390380' 00:31:48.488 killing process with pid 2390380 00:31:48.488 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2390380 00:31:48.488 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2390380 00:31:48.747 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:48.747 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:48.747 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:48.747 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:48.747 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:48.747 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:48.747 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:48.747 
15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:48.747 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:48.747 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.747 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:48.747 15:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.649 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:50.649 00:31:50.649 real 0m26.411s 00:31:50.649 user 1m30.667s 00:31:50.649 sys 0m11.361s 00:31:50.649 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:50.649 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:50.649 ************************************ 00:31:50.649 END TEST nvmf_fio_target 00:31:50.649 ************************************ 00:31:50.649 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:50.649 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:50.649 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:50.649 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:50.909 ************************************ 00:31:50.909 START TEST nvmf_bdevio 00:31:50.909 
************************************ 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:50.909 * Looking for test storage... 00:31:50.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:50.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.909 --rc genhtml_branch_coverage=1 00:31:50.909 --rc genhtml_function_coverage=1 00:31:50.909 --rc genhtml_legend=1 00:31:50.909 --rc geninfo_all_blocks=1 00:31:50.909 --rc geninfo_unexecuted_blocks=1 00:31:50.909 00:31:50.909 ' 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:50.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.909 --rc genhtml_branch_coverage=1 00:31:50.909 --rc genhtml_function_coverage=1 00:31:50.909 --rc genhtml_legend=1 00:31:50.909 --rc geninfo_all_blocks=1 00:31:50.909 --rc geninfo_unexecuted_blocks=1 00:31:50.909 00:31:50.909 ' 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:50.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.909 --rc genhtml_branch_coverage=1 00:31:50.909 --rc genhtml_function_coverage=1 00:31:50.909 --rc genhtml_legend=1 00:31:50.909 --rc geninfo_all_blocks=1 00:31:50.909 --rc geninfo_unexecuted_blocks=1 00:31:50.909 00:31:50.909 ' 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:50.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:31:50.909 --rc genhtml_branch_coverage=1 00:31:50.909 --rc genhtml_function_coverage=1 00:31:50.909 --rc genhtml_legend=1 00:31:50.909 --rc geninfo_all_blocks=1 00:31:50.909 --rc geninfo_unexecuted_blocks=1 00:31:50.909 00:31:50.909 ' 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:50.909 15:40:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:50.909 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.910 15:40:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
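[Editorial note on the PATH dumps above: paths/export.sh prepends the same toolchain directories each time it is sourced, which is why the exported PATH repeats /opt/golangci, /opt/protoc and /opt/go many times. A minimal sketch of how such a PATH could be deduplicated, keeping the first occurrence of each entry — this helper is hypothetical and not part of the harness:]

```shell
# Collapse duplicate entries in a PATH-like string, keeping the first
# occurrence of each directory. (Illustrative helper, not from the log.)
dedup_path() {
    local out="" seen=":" dir
    local IFS=':'
    for dir in $1; do
        case "$seen" in
            *":$dir:"*) ;;                       # already kept this directory
            *) out="${out:+$out:}$dir"; seen="$seen$dir:" ;;
        esac
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/sbin"
# prints /opt/go/1.21.1/bin:/usr/bin:/sbin
```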
-- # '[' 0 -eq 1 ']' 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:50.910 15:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:57.475 15:41:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:57.475 15:41:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:57.475 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:57.475 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.475 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:57.475 Found net devices under 0000:86:00.0: cvl_0_0 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:57.476 Found net devices under 0000:86:00.1: cvl_0_1 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:57.476 
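[Editorial note: the discovery loop above (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)`) maps each whitelisted PCI address — here the two 0x8086:0x159b E810 ports — to its kernel net interfaces through sysfs. A hedged sketch of that mapping; the optional base-directory parameter is an addition for testability and is not in the harness:]

```shell
# List net interfaces backed by a PCI address via sysfs, as the harness
# does per whitelisted device. Prints nothing if the device exposes no
# network function. Second arg (sysfs root) is a testing convenience.
net_devs_for_pci() {
    local pci=$1 base=${2:-/sys/bus/pci/devices} d
    for d in "$base/$pci/net/"*; do
        [ -e "$d" ] && printf '%s\n' "${d##*/}"   # basename = interface name
    done
}
```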
15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:57.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:57.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms 00:31:57.476 00:31:57.476 --- 10.0.0.2 ping statistics --- 00:31:57.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.476 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:57.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:57.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:31:57.476 00:31:57.476 --- 10.0.0.1 ping statistics --- 00:31:57.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.476 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
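[Editorial note: nvmf_tcp_init above moves one port (cvl_0_0, 10.0.0.2, the target side) into a network namespace and leaves its peer (cvl_0_1, 10.0.0.1, the initiator side) in the root namespace, so target/initiator traffic crosses the physical link; the pings verify both directions. A dry-run sketch that only emits the equivalent commands (pipe to `sh -e` as root to apply); interface names and addresses follow the log:]

```shell
# Emit the namespace loopback topology used above (dry run only).
nvmf_tcp_topology() {
    local tgt_if=$1 ini_if=$2 ns=$3
    cat <<EOF
ip -4 addr flush $tgt_if
ip -4 addr flush $ini_if
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT
EOF
}
```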
nvmf/common.sh@509 -- # nvmfpid=2397511 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2397511 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2397511 ']' 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:57.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:57.476 [2024-11-20 15:41:00.740359] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:57.476 [2024-11-20 15:41:00.741317] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:31:57.476 [2024-11-20 15:41:00.741354] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:57.476 [2024-11-20 15:41:00.821586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:57.476 [2024-11-20 15:41:00.864880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:57.476 [2024-11-20 15:41:00.864915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:57.476 [2024-11-20 15:41:00.864923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:57.476 [2024-11-20 15:41:00.864930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:57.476 [2024-11-20 15:41:00.864935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:57.476 [2024-11-20 15:41:00.866444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:57.476 [2024-11-20 15:41:00.866552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:57.476 [2024-11-20 15:41:00.866683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:57.476 [2024-11-20 15:41:00.866684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:57.476 [2024-11-20 15:41:00.934523] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:57.476 [2024-11-20 15:41:00.935002] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:57.476 [2024-11-20 15:41:00.935414] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:57.476 [2024-11-20 15:41:00.935559] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:57.476 [2024-11-20 15:41:00.935618] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.476 15:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:57.476 [2024-11-20 15:41:01.007436] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:57.476 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.476 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:57.477 Malloc0 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:57.477 [2024-11-20 15:41:01.087630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
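[Editorial note: bdevio.sh@18–22 above provision the target over RPC: create the TCP transport, back it with a 64 MiB / 512 B-block malloc bdev, and expose it as cnode1 listening on 10.0.0.2:4420. The same sequence as a standalone fragment — the rpc.py path is an assumption, and this requires a running nvmf_tgt on the default /var/tmp/spdk.sock:]

```shell
# Provisioning sequence mirrored from the log (config fragment; needs a
# live nvmf_tgt). Adjust the rpc.py path for your SPDK checkout.
rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```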
00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:57.477 { 00:31:57.477 "params": { 00:31:57.477 "name": "Nvme$subsystem", 00:31:57.477 "trtype": "$TEST_TRANSPORT", 00:31:57.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:57.477 "adrfam": "ipv4", 00:31:57.477 "trsvcid": "$NVMF_PORT", 00:31:57.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:57.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:57.477 "hdgst": ${hdgst:-false}, 00:31:57.477 "ddgst": ${ddgst:-false} 00:31:57.477 }, 00:31:57.477 "method": "bdev_nvme_attach_controller" 00:31:57.477 } 00:31:57.477 EOF 00:31:57.477 )") 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:57.477 15:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:57.477 "params": { 00:31:57.477 "name": "Nvme1", 00:31:57.477 "trtype": "tcp", 00:31:57.477 "traddr": "10.0.0.2", 00:31:57.477 "adrfam": "ipv4", 00:31:57.477 "trsvcid": "4420", 00:31:57.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:57.477 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:57.477 "hdgst": false, 00:31:57.477 "ddgst": false 00:31:57.477 }, 00:31:57.477 "method": "bdev_nvme_attach_controller" 00:31:57.477 }' 00:31:57.477 [2024-11-20 15:41:01.137531] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:31:57.477 [2024-11-20 15:41:01.137577] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2397695 ] 00:31:57.477 [2024-11-20 15:41:01.215646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:57.477 [2024-11-20 15:41:01.260036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:57.477 [2024-11-20 15:41:01.260141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.477 [2024-11-20 15:41:01.260141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:57.734 I/O targets: 00:31:57.734 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:57.734 00:31:57.734 00:31:57.734 CUnit - A unit testing framework for C - Version 2.1-3 00:31:57.734 http://cunit.sourceforge.net/ 00:31:57.734 00:31:57.734 00:31:57.734 Suite: bdevio tests on: Nvme1n1 00:31:57.734 Test: blockdev write read block ...passed 00:31:57.734 Test: blockdev write zeroes read block ...passed 00:31:57.734 Test: blockdev write zeroes read no split ...passed 00:31:57.734 Test: blockdev 
write zeroes read split ...passed 00:31:57.990 Test: blockdev write zeroes read split partial ...passed 00:31:57.990 Test: blockdev reset ...[2024-11-20 15:41:01.644474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:57.990 [2024-11-20 15:41:01.644538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0e340 (9): Bad file descriptor 00:31:57.990 [2024-11-20 15:41:01.648352] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:31:57.990 passed 00:31:57.990 Test: blockdev write read 8 blocks ...passed 00:31:57.990 Test: blockdev write read size > 128k ...passed 00:31:57.990 Test: blockdev write read invalid size ...passed 00:31:57.990 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:57.990 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:57.990 Test: blockdev write read max offset ...passed 00:31:57.990 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:57.990 Test: blockdev writev readv 8 blocks ...passed 00:31:57.990 Test: blockdev writev readv 30 x 1block ...passed 00:31:57.990 Test: blockdev writev readv block ...passed 00:31:57.990 Test: blockdev writev readv size > 128k ...passed 00:31:57.990 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:57.990 Test: blockdev comparev and writev ...[2024-11-20 15:41:01.860945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.990 [2024-11-20 15:41:01.860979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.990 [2024-11-20 15:41:01.860993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.990 
[2024-11-20 15:41:01.861002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:57.990 [2024-11-20 15:41:01.861293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.990 [2024-11-20 15:41:01.861305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:57.990 [2024-11-20 15:41:01.861317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.990 [2024-11-20 15:41:01.861324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:57.991 [2024-11-20 15:41:01.861608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.991 [2024-11-20 15:41:01.861624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:57.991 [2024-11-20 15:41:01.861636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.991 [2024-11-20 15:41:01.861644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:57.991 [2024-11-20 15:41:01.861940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.991 [2024-11-20 15:41:01.861956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:57.991 [2024-11-20 15:41:01.861968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:57.991 [2024-11-20 15:41:01.861976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:58.248 passed 00:31:58.248 Test: blockdev nvme passthru rw ...passed 00:31:58.248 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:41:01.944266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:58.248 [2024-11-20 15:41:01.944290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:58.248 [2024-11-20 15:41:01.944418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:58.248 [2024-11-20 15:41:01.944429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:58.248 [2024-11-20 15:41:01.944538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:58.248 [2024-11-20 15:41:01.944548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:58.248 [2024-11-20 15:41:01.944658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:58.248 [2024-11-20 15:41:01.944669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:58.248 passed 00:31:58.248 Test: blockdev nvme admin passthru ...passed 00:31:58.248 Test: blockdev copy ...passed 00:31:58.248 00:31:58.248 Run Summary: Type Total Ran Passed Failed Inactive 00:31:58.248 suites 1 1 n/a 0 0 00:31:58.248 tests 23 23 23 0 0 00:31:58.248 asserts 152 152 152 0 n/a 00:31:58.248 00:31:58.248 Elapsed time = 0.930 
seconds 00:31:58.248 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:58.248 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.248 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:58.248 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.248 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:58.248 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:58.248 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:58.248 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:58.248 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:58.248 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:58.249 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:58.249 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:58.249 rmmod nvme_tcp 00:31:58.507 rmmod nvme_fabrics 00:31:58.507 rmmod nvme_keyring 00:31:58.507 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:58.507 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:58.507 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:58.507 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2397511 ']' 00:31:58.507 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2397511 00:31:58.507 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2397511 ']' 00:31:58.507 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2397511 00:31:58.507 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:31:58.507 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:58.507 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2397511 00:31:58.507 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:31:58.507 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:31:58.507 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2397511' 00:31:58.507 killing process with pid 2397511 00:31:58.507 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2397511 00:31:58.507 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2397511 00:31:58.773 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:58.773 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:58.773 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:58.773 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:31:58.773 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:31:58.773 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:58.773 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:31:58.773 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:58.773 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:58.773 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.773 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:58.773 15:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.684 15:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:00.684 00:32:00.684 real 0m9.958s 00:32:00.684 user 0m8.650s 00:32:00.684 sys 0m5.263s 00:32:00.684 15:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:00.684 15:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.684 ************************************ 00:32:00.684 END TEST nvmf_bdevio 00:32:00.684 ************************************ 00:32:00.684 15:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:00.684 00:32:00.684 real 4m33.407s 00:32:00.684 user 9m7.698s 00:32:00.684 sys 1m53.948s 00:32:00.684 15:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:32:00.684 15:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:00.684 ************************************ 00:32:00.684 END TEST nvmf_target_core_interrupt_mode 00:32:00.684 ************************************ 00:32:00.684 15:41:04 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:00.684 15:41:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:00.684 15:41:04 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:00.684 15:41:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:00.943 ************************************ 00:32:00.943 START TEST nvmf_interrupt 00:32:00.943 ************************************ 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:00.943 * Looking for test storage... 
00:32:00.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:00.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.943 --rc genhtml_branch_coverage=1 00:32:00.943 --rc genhtml_function_coverage=1 00:32:00.943 --rc genhtml_legend=1 00:32:00.943 --rc geninfo_all_blocks=1 00:32:00.943 --rc geninfo_unexecuted_blocks=1 00:32:00.943 00:32:00.943 ' 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:00.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.943 --rc genhtml_branch_coverage=1 00:32:00.943 --rc 
genhtml_function_coverage=1 00:32:00.943 --rc genhtml_legend=1 00:32:00.943 --rc geninfo_all_blocks=1 00:32:00.943 --rc geninfo_unexecuted_blocks=1 00:32:00.943 00:32:00.943 ' 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:00.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.943 --rc genhtml_branch_coverage=1 00:32:00.943 --rc genhtml_function_coverage=1 00:32:00.943 --rc genhtml_legend=1 00:32:00.943 --rc geninfo_all_blocks=1 00:32:00.943 --rc geninfo_unexecuted_blocks=1 00:32:00.943 00:32:00.943 ' 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:00.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.943 --rc genhtml_branch_coverage=1 00:32:00.943 --rc genhtml_function_coverage=1 00:32:00.943 --rc genhtml_legend=1 00:32:00.943 --rc geninfo_all_blocks=1 00:32:00.943 --rc geninfo_unexecuted_blocks=1 00:32:00.943 00:32:00.943 ' 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.943 
15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.943 
15:41:04 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.943 15:41:04 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:00.944 15:41:04 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:00.944 
15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:00.944 15:41:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:07.512 15:41:10 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:07.512 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:07.512 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:07.512 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:07.513 15:41:10 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:07.513 Found net devices under 0000:86:00.0: cvl_0_0 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:07.513 Found net devices under 0000:86:00.1: cvl_0_1 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.513 15:41:10 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:07.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:07.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:32:07.513 00:32:07.513 --- 10.0.0.2 ping statistics --- 00:32:07.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.513 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:07.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:07.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:32:07.513 00:32:07.513 --- 10.0.0.1 ping statistics --- 00:32:07.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.513 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:07.513 15:41:10 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2401240 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2401240 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2401240 ']' 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.513 [2024-11-20 15:41:10.740223] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:07.513 [2024-11-20 15:41:10.741296] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:32:07.513 [2024-11-20 15:41:10.741339] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:07.513 [2024-11-20 15:41:10.824380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:07.513 [2024-11-20 15:41:10.869715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:07.513 [2024-11-20 15:41:10.869750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:07.513 [2024-11-20 15:41:10.869758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:07.513 [2024-11-20 15:41:10.869765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:07.513 [2024-11-20 15:41:10.869770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:07.513 [2024-11-20 15:41:10.870921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.513 [2024-11-20 15:41:10.870922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.513 [2024-11-20 15:41:10.938486] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:07.513 [2024-11-20 15:41:10.939005] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:07.513 [2024-11-20 15:41:10.939226] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:07.513 15:41:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.513 15:41:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:07.513 15:41:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:07.513 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:07.513 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:07.513 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:07.513 5000+0 records in 00:32:07.513 5000+0 records out 00:32:07.513 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0184348 s, 555 MB/s 00:32:07.513 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:07.513 15:41:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.513 15:41:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.513 AIO0 00:32:07.513 15:41:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.513 15:41:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:07.513 15:41:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.513 15:41:11 
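The `dd` + `bdev_aio_create` step above sizes the AIO backing file at 2048-byte blocks × 5000 blocks = 10240000 bytes (~9.8 MiB), which matches the `dd` summary in the log. A standalone bash sketch of just that arithmetic (the temporary path is illustrative, not the harness's `test/nvmf/target/aiofile`):

```shell
# Create a throwaway backing file the same way the harness does
# (2048-byte blocks, 5000 of them) and confirm the resulting size.
aiofile=$(mktemp)
dd if=/dev/zero of="$aiofile" bs=2048 count=5000 status=none
size=$(stat -c %s "$aiofile")   # GNU stat; prints the size in bytes
echo "$size"                    # prints 10240000
rm -f "$aiofile"
```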
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.513 [2024-11-20 15:41:11.087744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:07.513 15:41:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.513 15:41:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:07.513 15:41:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.514 [2024-11-20 15:41:11.128104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2401240 0 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2401240 0 idle 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2401240 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2401240 -w 256 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2401240 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.26 reactor_0' 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2401240 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.26 reactor_0 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:07.514 
15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2401240 1 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2401240 1 idle 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2401240 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2401240 -w 256 00:32:07.514 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2401250 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2401250 root 20 0 128.2g 
46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2401501 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2401240 0 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2401240 0 busy 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2401240 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- 
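The `reactor_is_busy_or_idle` checks running throughout this log boil down to: take one `top -bHn 1` sample for the target pid, keep the `reactor_N` thread line, extract the %CPU column (field 9), truncate the fraction, and compare against the busy/idle threshold. A bash sketch of that parse, run against a sample thread line copied from the log so no live `top` is needed:

```shell
# Sample line in the shape produced by `top -bHn 1 -p <pid> -w 256` (from the log above).
top_reactor='2401240 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.26 reactor_0'
cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')  # %CPU column
cpu_rate=${cpu_rate%%.*}        # drop the fractional part, as the harness does
idle_threshold=30               # threshold taken from the log (idle_threshold=30)
if (( cpu_rate > idle_threshold )); then state=busy; else state=idle; fi
echo "$state"                   # prints idle for this 0.0%-CPU sample
```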
interrupt/common.sh@12 -- # local state=busy 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2401240 -w 256 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:07.773 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2401240 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:00.26 reactor_0' 00:32:08.030 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2401240 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:00.26 reactor_0 00:32:08.030 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:08.030 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:08.030 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:08.030 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:08.030 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:08.030 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:08.030 15:41:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:32:08.961 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:32:08.961 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:08.961 15:41:12 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@26 -- # top -bHn 1 -p 2401240 -w 256 00:32:08.961 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:08.961 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2401240 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:02.48 reactor_0' 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2401240 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:02.48 reactor_0 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2401240 1 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2401240 1 busy 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2401240 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local 
busy_threshold=30 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2401240 -w 256 00:32:09.218 15:41:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:09.218 15:41:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2401250 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:01.29 reactor_1' 00:32:09.218 15:41:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2401250 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:01.29 reactor_1 00:32:09.218 15:41:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:09.218 15:41:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:09.218 15:41:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:09.218 15:41:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:09.218 15:41:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:09.218 15:41:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:09.218 15:41:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:09.218 15:41:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:09.218 15:41:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2401501 00:32:19.174 Initializing NVMe Controllers 00:32:19.174 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:19.174 
Controller IO queue size 256, less than required. 00:32:19.174 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:19.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:19.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:19.174 Initialization complete. Launching workers. 00:32:19.174 ======================================================== 00:32:19.174 Latency(us) 00:32:19.174 Device Information : IOPS MiB/s Average min max 00:32:19.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16159.47 63.12 15849.57 3585.03 32121.23 00:32:19.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16312.67 63.72 15697.94 7448.41 27272.54 00:32:19.174 ======================================================== 00:32:19.174 Total : 32472.13 126.84 15773.40 3585.03 32121.23 00:32:19.174 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2401240 0 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2401240 0 idle 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2401240 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:19.174 15:41:21 
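The perf summary above can be cross-checked: total IOPS is the sum of the two per-core rows, and the overall average latency is the IOPS-weighted mean of the per-core averages (the 32472.13 in the log's Total row versus the 32472.14 below is just rounding inside perf's own report). An awk sketch using the two rows from the table:

```shell
# Recompute the Total row of the latency table from the per-core rows.
result=$(awk 'BEGIN {
  iops2 = 16159.47; lat2 = 15849.57   # "from core 2" row above (IOPS, avg us)
  iops3 = 16312.67; lat3 = 15697.94   # "from core 3" row above
  total = iops2 + iops3
  printf "%.2f %.2f", total, (iops2 * lat2 + iops3 * lat3) / total
}')
echo "$result"                        # prints "32472.14 15773.40"
```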
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2401240 -w 256 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2401240 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.25 reactor_0' 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2401240 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.25 reactor_0 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2401240 1 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2401240 1 idle 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2401240 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2401240 -w 256 00:32:19.174 15:41:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:19.174 15:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2401250 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:09.99 reactor_1' 00:32:19.174 15:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2401250 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:09.99 reactor_1 00:32:19.174 15:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:19.174 15:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:19.174 15:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:19.174 15:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:19.174 15:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:19.174 15:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:19.174 15:41:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:19.174 15:41:22 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:32:19.175 15:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:19.175 15:41:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:19.175 15:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:19.175 15:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:19.175 15:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:19.175 15:41:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2401240 0 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2401240 0 idle 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2401240 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:21.080 15:41:24 
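The `waitforserial` step above is a bounded retry loop: poll `lsblk -l -o NAME,SERIAL | grep -c <serial>` every 2 s, for up to 16 attempts, until the reported device count matches the expected one. A bash sketch of the same shape, with the probe stubbed out so it runs anywhere (the function names here are illustrative, not the harness's):

```shell
# Stub probe standing in for: lsblk -l -o NAME,SERIAL | grep -c "$serial"
probe() { echo 1; }

wait_for_devices() {            # retry until probe reports the wanted count
  local want=$1 i=0 got
  while (( i++ <= 15 )); do
    got=$(probe)
    (( got == want )) && return 0
    sleep 0                     # the real harness sleeps 2 seconds per retry
  done
  return 1
}

wait_for_devices 1 && status=connected || status=timeout
echo "$status"                  # prints connected with this stub
```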
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2401240 -w 256 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2401240 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.53 reactor_0' 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2401240 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.53 reactor_0 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 
0 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2401240 1 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2401240 1 idle 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2401240 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2401240 -w 256 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2401250 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.09 reactor_1' 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2401250 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.09 reactor_1 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:21.080 15:41:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:21.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- 
# set +e 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:21.339 rmmod nvme_tcp 00:32:21.339 rmmod nvme_fabrics 00:32:21.339 rmmod nvme_keyring 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2401240 ']' 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2401240 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2401240 ']' 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2401240 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2401240 00:32:21.339 15:41:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:21.598 15:41:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:21.598 15:41:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2401240' 00:32:21.598 killing process with pid 2401240 00:32:21.598 15:41:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2401240 00:32:21.598 15:41:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2401240 00:32:21.598 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:21.598 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ 
tcp == \t\c\p ]] 00:32:21.598 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:21.598 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:21.598 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:21.598 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:21.598 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:32:21.598 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:21.598 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:21.598 15:41:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.598 15:41:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:21.598 15:41:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:24.148 15:41:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:24.148 00:32:24.148 real 0m22.891s 00:32:24.148 user 0m39.746s 00:32:24.148 sys 0m8.468s 00:32:24.148 15:41:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:24.148 15:41:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:24.148 ************************************ 00:32:24.148 END TEST nvmf_interrupt 00:32:24.148 ************************************ 00:32:24.148 00:32:24.148 real 27m23.423s 00:32:24.148 user 56m27.265s 00:32:24.148 sys 9m21.374s 00:32:24.148 15:41:27 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:24.148 15:41:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:24.149 ************************************ 00:32:24.149 END TEST nvmf_tcp 00:32:24.149 ************************************ 00:32:24.149 15:41:27 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:24.149 15:41:27 -- 
spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:24.149 15:41:27 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:24.149 15:41:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:24.149 15:41:27 -- common/autotest_common.sh@10 -- # set +x 00:32:24.149 ************************************ 00:32:24.149 START TEST spdkcli_nvmf_tcp 00:32:24.149 ************************************ 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:24.149 * Looking for test storage... 00:32:24.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 
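The xtrace entries above and below step through the `lt 1.15 2` check, which `scripts/common.sh` implements by splitting each version string on `.`, `-`, and `:` into arrays (`ver1`, `ver2`) and comparing them component by component. A minimal standalone sketch of that comparison, assuming the same split-on-`.-:`, compare-numerically semantics the trace shows (this is an illustrative reimplementation, not the SPDK helper itself):

```shell
#!/usr/bin/env bash
# Sketch of a dotted-version "less than" test, modeled on the
# cmp_versions trace above. Returns 0 (true) iff $1 < $2.
lt() {
    local IFS=.-:            # split components on '.', '-' or ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    # Walk the longer of the two component lists, padding with 0.
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                 # equal versions are not "less than"
}
```

For example, `lt 1.15 2` succeeds (so the log takes the pre-2.x lcov option path), while `lt 2.0 1.15` and `lt 1.15 1.15` both fail.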
00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:24.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.149 --rc genhtml_branch_coverage=1 00:32:24.149 --rc genhtml_function_coverage=1 00:32:24.149 --rc genhtml_legend=1 00:32:24.149 --rc geninfo_all_blocks=1 
00:32:24.149 --rc geninfo_unexecuted_blocks=1 00:32:24.149 00:32:24.149 ' 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:24.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.149 --rc genhtml_branch_coverage=1 00:32:24.149 --rc genhtml_function_coverage=1 00:32:24.149 --rc genhtml_legend=1 00:32:24.149 --rc geninfo_all_blocks=1 00:32:24.149 --rc geninfo_unexecuted_blocks=1 00:32:24.149 00:32:24.149 ' 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:24.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.149 --rc genhtml_branch_coverage=1 00:32:24.149 --rc genhtml_function_coverage=1 00:32:24.149 --rc genhtml_legend=1 00:32:24.149 --rc geninfo_all_blocks=1 00:32:24.149 --rc geninfo_unexecuted_blocks=1 00:32:24.149 00:32:24.149 ' 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:24.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.149 --rc genhtml_branch_coverage=1 00:32:24.149 --rc genhtml_function_coverage=1 00:32:24.149 --rc genhtml_legend=1 00:32:24.149 --rc geninfo_all_blocks=1 00:32:24.149 --rc geninfo_unexecuted_blocks=1 00:32:24.149 00:32:24.149 ' 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:24.149 15:41:27 
spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:24.149 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:24.150 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:24.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:24.150 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:24.150 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:24.150 15:41:27 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:24.150 15:41:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:24.150 15:41:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:24.150 15:41:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:24.150 15:41:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:24.150 15:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:24.150 15:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:24.150 15:41:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:24.150 15:41:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2404193 00:32:24.150 15:41:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2404193 00:32:24.150 15:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2404193 ']' 00:32:24.150 15:41:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt 
-m 0x3 -p 0 00:32:24.150 15:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:24.150 15:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:24.150 15:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:24.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:24.150 15:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:24.150 15:41:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:24.150 [2024-11-20 15:41:27.861098] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:32:24.150 [2024-11-20 15:41:27.861150] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2404193 ] 00:32:24.150 [2024-11-20 15:41:27.919429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:24.150 [2024-11-20 15:41:27.964184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:24.150 [2024-11-20 15:41:27.964187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.409 15:41:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:24.409 15:41:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:24.409 15:41:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:24.409 15:41:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:24.409 15:41:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:24.409 15:41:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:24.409 15:41:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ 
tcp == \r\d\m\a ]] 00:32:24.409 15:41:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:24.409 15:41:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:24.409 15:41:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:24.409 15:41:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:24.409 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:24.409 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:24.409 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:24.409 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:24.409 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:24.409 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:24.409 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:24.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:24.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:24.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:24.409 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:24.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:24.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:24.409 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:24.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:24.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:24.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:24.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:24.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:24.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:24.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:24.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:24.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:24.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:24.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:24.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:24.409 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:24.409 ' 00:32:26.940 [2024-11-20 15:41:30.791095] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.311 [2024-11-20 15:41:32.131725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:32:30.837 [2024-11-20 15:41:34.623277] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:33.370 [2024-11-20 15:41:36.778053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:34.745 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:34.745 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:34.745 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:34.745 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:34.745 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:34.745 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:34.745 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:34.745 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:34.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:34.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:34.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:34.745 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:34.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:34.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:34.745 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:34.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:34.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:34.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:34.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:34.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:34.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:34.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:34.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:34.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:34.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:34.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:34.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:34.745 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:34.745 15:41:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:32:34.745 15:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:34.745 15:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:34.745 15:41:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:34.745 15:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:34.745 15:41:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:34.745 15:41:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:34.745 15:41:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:35.312 15:41:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:35.312 15:41:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:35.312 15:41:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:35.312 15:41:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:35.312 15:41:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:35.312 15:41:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:35.312 15:41:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:35.312 15:41:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:35.312 15:41:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:35.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:32:35.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:35.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:35.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:35.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:35.312 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:35.312 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:35.312 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:35.312 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:35.312 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:35.312 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:35.312 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:35.312 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:35.312 ' 00:32:41.872 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:41.872 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:41.872 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:41.872 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:41.872 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:41.872 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:41.872 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:41.872 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:41.872 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:41.872 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:41.872 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:41.872 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:41.872 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:41.872 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:41.872 15:41:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:41.872 15:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:41.872 15:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:41.872 15:41:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2404193 00:32:41.872 15:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2404193 ']' 00:32:41.872 15:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2404193 00:32:41.872 15:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:32:41.872 15:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:41.872 15:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2404193 00:32:41.872 15:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:41.872 15:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:41.872 15:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2404193' 00:32:41.872 killing process with pid 2404193 00:32:41.872 15:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2404193 00:32:41.872 15:41:44 
spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2404193 00:32:41.872 15:41:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:41.872 15:41:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:41.872 15:41:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2404193 ']' 00:32:41.872 15:41:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2404193 00:32:41.872 15:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2404193 ']' 00:32:41.872 15:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2404193 00:32:41.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2404193) - No such process 00:32:41.873 15:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2404193 is not found' 00:32:41.873 Process with pid 2404193 is not found 00:32:41.873 15:41:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:41.873 15:41:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:41.873 15:41:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:41.873 00:32:41.873 real 0m17.330s 00:32:41.873 user 0m38.234s 00:32:41.873 sys 0m0.781s 00:32:41.873 15:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:41.873 15:41:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:41.873 ************************************ 00:32:41.873 END TEST spdkcli_nvmf_tcp 00:32:41.873 ************************************ 00:32:41.873 15:41:44 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:41.873 15:41:44 -- common/autotest_common.sh@1105 -- # '[' 3 
-le 1 ']' 00:32:41.873 15:41:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:41.873 15:41:44 -- common/autotest_common.sh@10 -- # set +x 00:32:41.873 ************************************ 00:32:41.873 START TEST nvmf_identify_passthru 00:32:41.873 ************************************ 00:32:41.873 15:41:45 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:41.873 * Looking for test storage... 00:32:41.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:41.873 15:41:45 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:41.873 15:41:45 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:32:41.873 15:41:45 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:41.873 15:41:45 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:41.873 15:41:45 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:41.873 15:41:45 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:41.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.873 --rc genhtml_branch_coverage=1 00:32:41.873 --rc genhtml_function_coverage=1 00:32:41.873 --rc genhtml_legend=1 
00:32:41.873 --rc geninfo_all_blocks=1 00:32:41.873 --rc geninfo_unexecuted_blocks=1 00:32:41.873 00:32:41.873 ' 00:32:41.873 15:41:45 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:41.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.873 --rc genhtml_branch_coverage=1 00:32:41.873 --rc genhtml_function_coverage=1 00:32:41.873 --rc genhtml_legend=1 00:32:41.873 --rc geninfo_all_blocks=1 00:32:41.873 --rc geninfo_unexecuted_blocks=1 00:32:41.873 00:32:41.873 ' 00:32:41.873 15:41:45 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:41.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.873 --rc genhtml_branch_coverage=1 00:32:41.873 --rc genhtml_function_coverage=1 00:32:41.873 --rc genhtml_legend=1 00:32:41.873 --rc geninfo_all_blocks=1 00:32:41.873 --rc geninfo_unexecuted_blocks=1 00:32:41.873 00:32:41.873 ' 00:32:41.873 15:41:45 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:41.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.873 --rc genhtml_branch_coverage=1 00:32:41.873 --rc genhtml_function_coverage=1 00:32:41.873 --rc genhtml_legend=1 00:32:41.873 --rc geninfo_all_blocks=1 00:32:41.873 --rc geninfo_unexecuted_blocks=1 00:32:41.873 00:32:41.873 ' 00:32:41.873 15:41:45 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:41.873 15:41:45 nvmf_identify_passthru -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.873 15:41:45 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.873 15:41:45 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.873 15:41:45 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.873 15:41:45 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.873 15:41:45 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:41.873 15:41:45 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:41.873 15:41:45 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:41.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:41.873 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:41.874 15:41:45 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.874 15:41:45 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:41.874 15:41:45 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.874 15:41:45 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.874 15:41:45 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.874 15:41:45 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.874 15:41:45 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.874 15:41:45 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.874 15:41:45 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:41.874 15:41:45 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.874 15:41:45 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:41.874 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:41.874 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:41.874 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:41.874 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:41.874 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:41.874 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.874 15:41:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:41.874 15:41:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.874 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:41.874 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:41.874 15:41:45 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:41.874 15:41:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:47.274 
15:41:50 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:47.274 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:47.275 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:47.275 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:47.275 Found net devices under 0000:86:00.0: cvl_0_0 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.275 15:41:50 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:47.275 Found net devices under 0000:86:00.1: cvl_0_1 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:47.275 
15:41:50 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:47.275 15:41:50 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:47.275 15:41:51 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:47.275 15:41:51 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:47.275 15:41:51 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:47.275 15:41:51 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:47.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:47.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:32:47.275 00:32:47.275 --- 10.0.0.2 ping statistics --- 00:32:47.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.275 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:32:47.275 15:41:51 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:47.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:47.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:32:47.275 00:32:47.275 --- 10.0.0.1 ping statistics --- 00:32:47.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.275 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:32:47.275 15:41:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:47.275 15:41:51 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:47.275 15:41:51 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:47.275 15:41:51 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:47.275 15:41:51 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:47.275 15:41:51 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:47.275 15:41:51 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:47.275 15:41:51 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:47.275 15:41:51 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:47.275 15:41:51 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:47.275 15:41:51 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:47.275 15:41:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:47.275 15:41:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:47.275 
15:41:51 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:47.275 15:41:51 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:32:47.275 15:41:51 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:47.275 15:41:51 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:47.275 15:41:51 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:47.275 15:41:51 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:32:47.275 15:41:51 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:47.275 15:41:51 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:47.275 15:41:51 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:47.275 15:41:51 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:47.276 15:41:51 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:47.276 15:41:51 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:32:47.276 15:41:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:32:47.276 15:41:51 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:32:47.276 15:41:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:47.276 15:41:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:47.276 15:41:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:51.462 15:41:55 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:32:51.462 15:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:51.462 15:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:51.462 15:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:55.654 15:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:55.654 15:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:55.654 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:55.654 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.654 15:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:55.654 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:55.654 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.654 15:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2411470 00:32:55.654 15:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:55.654 15:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:55.654 15:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2411470 00:32:55.654 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2411470 ']' 00:32:55.654 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:32:55.654 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:55.654 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:55.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:55.654 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:55.654 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.913 [2024-11-20 15:41:59.569199] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:32:55.913 [2024-11-20 15:41:59.569244] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:55.913 [2024-11-20 15:41:59.649374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:55.913 [2024-11-20 15:41:59.692601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:55.913 [2024-11-20 15:41:59.692640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:55.913 [2024-11-20 15:41:59.692647] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:55.913 [2024-11-20 15:41:59.692654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:55.913 [2024-11-20 15:41:59.692660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:55.913 [2024-11-20 15:41:59.694230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.913 [2024-11-20 15:41:59.694339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:55.913 [2024-11-20 15:41:59.694447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.913 [2024-11-20 15:41:59.694447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:55.913 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:55.913 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:32:55.913 15:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:55.913 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.913 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.913 INFO: Log level set to 20 00:32:55.913 INFO: Requests: 00:32:55.913 { 00:32:55.913 "jsonrpc": "2.0", 00:32:55.913 "method": "nvmf_set_config", 00:32:55.913 "id": 1, 00:32:55.913 "params": { 00:32:55.913 "admin_cmd_passthru": { 00:32:55.913 "identify_ctrlr": true 00:32:55.913 } 00:32:55.913 } 00:32:55.913 } 00:32:55.913 00:32:55.913 INFO: response: 00:32:55.913 { 00:32:55.913 "jsonrpc": "2.0", 00:32:55.913 "id": 1, 00:32:55.913 "result": true 00:32:55.913 } 00:32:55.913 00:32:55.913 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.913 15:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:55.913 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.913 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.913 INFO: Setting log level to 20 00:32:55.913 INFO: Setting log level to 20 00:32:55.913 INFO: Log level set to 20 00:32:55.913 INFO: Log level set to 20 00:32:55.913 
INFO: Requests: 00:32:55.913 { 00:32:55.913 "jsonrpc": "2.0", 00:32:55.913 "method": "framework_start_init", 00:32:55.913 "id": 1 00:32:55.913 } 00:32:55.913 00:32:55.913 INFO: Requests: 00:32:55.913 { 00:32:55.913 "jsonrpc": "2.0", 00:32:55.913 "method": "framework_start_init", 00:32:55.913 "id": 1 00:32:55.913 } 00:32:55.913 00:32:55.913 [2024-11-20 15:41:59.797250] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:55.913 INFO: response: 00:32:55.913 { 00:32:55.913 "jsonrpc": "2.0", 00:32:55.913 "id": 1, 00:32:55.913 "result": true 00:32:55.913 } 00:32:55.913 00:32:55.913 INFO: response: 00:32:55.913 { 00:32:55.913 "jsonrpc": "2.0", 00:32:55.913 "id": 1, 00:32:55.913 "result": true 00:32:55.913 } 00:32:55.913 00:32:55.913 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.913 15:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:55.913 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.913 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.913 INFO: Setting log level to 40 00:32:55.913 INFO: Setting log level to 40 00:32:55.913 INFO: Setting log level to 40 00:32:55.913 [2024-11-20 15:41:59.810597] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:55.913 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.913 15:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:55.913 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:55.913 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:56.172 15:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:32:56.172 15:41:59 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.172 15:41:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:59.456 Nvme0n1 00:32:59.456 15:42:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.456 15:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:59.456 15:42:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.456 15:42:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:59.456 15:42:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.456 15:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:59.456 15:42:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.456 15:42:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:59.456 15:42:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.456 15:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:59.456 15:42:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.456 15:42:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:59.456 [2024-11-20 15:42:02.722059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:59.456 15:42:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.456 15:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:59.456 15:42:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.456 15:42:02 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:59.456 [ 00:32:59.456 { 00:32:59.456 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:59.456 "subtype": "Discovery", 00:32:59.456 "listen_addresses": [], 00:32:59.457 "allow_any_host": true, 00:32:59.457 "hosts": [] 00:32:59.457 }, 00:32:59.457 { 00:32:59.457 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:59.457 "subtype": "NVMe", 00:32:59.457 "listen_addresses": [ 00:32:59.457 { 00:32:59.457 "trtype": "TCP", 00:32:59.457 "adrfam": "IPv4", 00:32:59.457 "traddr": "10.0.0.2", 00:32:59.457 "trsvcid": "4420" 00:32:59.457 } 00:32:59.457 ], 00:32:59.457 "allow_any_host": true, 00:32:59.457 "hosts": [], 00:32:59.457 "serial_number": "SPDK00000000000001", 00:32:59.457 "model_number": "SPDK bdev Controller", 00:32:59.457 "max_namespaces": 1, 00:32:59.457 "min_cntlid": 1, 00:32:59.457 "max_cntlid": 65519, 00:32:59.457 "namespaces": [ 00:32:59.457 { 00:32:59.457 "nsid": 1, 00:32:59.457 "bdev_name": "Nvme0n1", 00:32:59.457 "name": "Nvme0n1", 00:32:59.457 "nguid": "9E9F001549034DA7AF644D8D704F8837", 00:32:59.457 "uuid": "9e9f0015-4903-4da7-af64-4d8d704f8837" 00:32:59.457 } 00:32:59.457 ] 00:32:59.457 } 00:32:59.457 ] 00:32:59.457 15:42:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.457 15:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:59.457 15:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:59.457 15:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:59.457 15:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:32:59.457 15:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:59.457 15:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:59.457 15:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:59.457 15:42:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:59.457 15:42:03 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:32:59.457 15:42:03 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:59.457 15:42:03 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:59.457 15:42:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.457 15:42:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:59.457 15:42:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.457 15:42:03 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:59.457 15:42:03 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:59.457 15:42:03 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:59.457 15:42:03 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:59.457 15:42:03 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:59.457 15:42:03 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:59.457 15:42:03 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:59.457 15:42:03 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:59.457 rmmod nvme_tcp 00:32:59.457 rmmod nvme_fabrics 00:32:59.457 rmmod nvme_keyring 00:32:59.457 15:42:03 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:59.457 15:42:03 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:59.457 15:42:03 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:59.457 15:42:03 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2411470 ']' 00:32:59.457 15:42:03 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2411470 00:32:59.457 15:42:03 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2411470 ']' 00:32:59.457 15:42:03 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2411470 00:32:59.457 15:42:03 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:32:59.457 15:42:03 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:59.457 15:42:03 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2411470 00:32:59.457 15:42:03 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:59.457 15:42:03 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:59.457 15:42:03 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2411470' 00:32:59.457 killing process with pid 2411470 00:32:59.457 15:42:03 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2411470 00:32:59.457 15:42:03 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2411470 00:33:01.361 15:42:04 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:01.361 15:42:04 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:01.361 15:42:04 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:01.361 15:42:04 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:01.361 15:42:04 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:01.361 15:42:04 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:01.361 15:42:04 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:01.361 15:42:04 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:01.361 15:42:04 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:01.361 15:42:04 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.361 15:42:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:01.361 15:42:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.267 15:42:06 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:03.267 00:33:03.267 real 0m21.871s 00:33:03.267 user 0m27.106s 00:33:03.267 sys 0m6.161s 00:33:03.267 15:42:06 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:03.267 15:42:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:03.267 ************************************ 00:33:03.267 END TEST nvmf_identify_passthru 00:33:03.267 ************************************ 00:33:03.267 15:42:06 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:03.267 15:42:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:03.267 15:42:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:03.267 15:42:06 -- common/autotest_common.sh@10 -- # set +x 00:33:03.267 ************************************ 00:33:03.267 START TEST nvmf_dif 00:33:03.267 ************************************ 00:33:03.267 15:42:06 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:03.267 * Looking for test storage... 
00:33:03.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:03.267 15:42:07 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:03.267 15:42:07 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:33:03.267 15:42:07 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:03.267 15:42:07 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:03.267 15:42:07 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:03.267 15:42:07 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:03.267 15:42:07 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:03.267 15:42:07 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:03.268 15:42:07 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:03.268 15:42:07 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:03.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.268 --rc genhtml_branch_coverage=1 00:33:03.268 --rc genhtml_function_coverage=1 00:33:03.268 --rc genhtml_legend=1 00:33:03.268 --rc geninfo_all_blocks=1 00:33:03.268 --rc geninfo_unexecuted_blocks=1 00:33:03.268 00:33:03.268 ' 00:33:03.268 15:42:07 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:03.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.268 --rc genhtml_branch_coverage=1 00:33:03.268 --rc genhtml_function_coverage=1 00:33:03.268 --rc genhtml_legend=1 00:33:03.268 --rc geninfo_all_blocks=1 00:33:03.268 --rc geninfo_unexecuted_blocks=1 00:33:03.268 00:33:03.268 ' 00:33:03.268 15:42:07 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:33:03.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.268 --rc genhtml_branch_coverage=1 00:33:03.268 --rc genhtml_function_coverage=1 00:33:03.268 --rc genhtml_legend=1 00:33:03.268 --rc geninfo_all_blocks=1 00:33:03.268 --rc geninfo_unexecuted_blocks=1 00:33:03.268 00:33:03.268 ' 00:33:03.268 15:42:07 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:03.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.268 --rc genhtml_branch_coverage=1 00:33:03.268 --rc genhtml_function_coverage=1 00:33:03.268 --rc genhtml_legend=1 00:33:03.268 --rc geninfo_all_blocks=1 00:33:03.268 --rc geninfo_unexecuted_blocks=1 00:33:03.268 00:33:03.268 ' 00:33:03.268 15:42:07 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:03.268 15:42:07 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:03.268 15:42:07 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:03.268 15:42:07 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.268 15:42:07 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.268 15:42:07 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.268 15:42:07 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:03.268 15:42:07 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:03.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:03.268 15:42:07 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:03.268 15:42:07 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:33:03.268 15:42:07 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:03.268 15:42:07 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:03.268 15:42:07 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:03.268 15:42:07 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.268 15:42:07 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:03.269 15:42:07 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.269 15:42:07 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:03.269 15:42:07 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:03.269 15:42:07 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:33:03.269 15:42:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:09.835 15:42:12 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:09.835 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:09.835 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:09.835 15:42:12 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:09.835 Found net devices under 0000:86:00.0: cvl_0_0 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:09.835 Found net devices under 0000:86:00.1: cvl_0_1 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:09.835 
15:42:12 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:09.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:09.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:33:09.835 00:33:09.835 --- 10.0.0.2 ping statistics --- 00:33:09.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.835 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:09.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:09.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:33:09.835 00:33:09.835 --- 10.0.0.1 ping statistics --- 00:33:09.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.835 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:09.835 15:42:12 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:12.371 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:12.371 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:12.371 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:12.371 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:12.371 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:12.371 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:12.371 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:12.371 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:12.371 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:12.371 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:12.371 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:12.371 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:12.371 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:33:12.371 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:12.371 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:12.371 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:12.371 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:12.371 15:42:15 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:12.371 15:42:15 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:12.371 15:42:15 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:12.371 15:42:15 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:12.371 15:42:15 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:12.371 15:42:15 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:12.371 15:42:15 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:12.371 15:42:15 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:12.371 15:42:15 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:12.371 15:42:15 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:12.371 15:42:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:12.371 15:42:15 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2417453 00:33:12.371 15:42:15 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:12.371 15:42:15 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2417453 00:33:12.371 15:42:15 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2417453 ']' 00:33:12.371 15:42:15 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.371 15:42:15 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.371 15:42:15 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:12.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:12.371 15:42:15 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.371 15:42:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:12.371 [2024-11-20 15:42:15.944852] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:33:12.371 [2024-11-20 15:42:15.944896] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.371 [2024-11-20 15:42:16.020884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.371 [2024-11-20 15:42:16.062104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:12.371 [2024-11-20 15:42:16.062143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:12.371 [2024-11-20 15:42:16.062150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:12.371 [2024-11-20 15:42:16.062156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:12.371 [2024-11-20 15:42:16.062161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
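The `nvmf_tcp_init` sequence earlier in this log (flush, `ip netns add`, move the target port into the namespace, assign 10.0.0.1/10.0.0.2, bring links up, open TCP port 4420) can be sketched as a standalone script. This is a simplified reconstruction, not the harness itself; interface names, the namespace name, and addresses are taken from this run. It defaults to a dry run that only prints the commands, since the real thing needs root and back-to-back-wired NIC ports:

```shell
#!/usr/bin/env bash
# Sketch of nvmf/common.sh's nvmf_tcp_init flow from this log:
# move one port of a back-to-back NIC pair into a namespace so
# initiator (10.0.0.1) and target (10.0.0.2) traffic crosses the wire.
# DRY_RUN=1 (default here) just prints each command.
set -euo pipefail

TARGET_IF=${TARGET_IF:-cvl_0_0}
INITIATOR_IF=${INITIATOR_IF:-cvl_0_1}
NS=${NS:-cvl_0_0_ns_spdk}
DRY_RUN=${DRY_RUN:-1}

CMDS=()
run() {
    # Record and either print or execute each step.
    CMDS+=("$*")
    if [[ "$DRY_RUN" == 1 ]]; then echo "$*"; else "$@"; fi
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
```

After this, the two `ping -c 1` checks in the log (host to 10.0.0.2, namespace to 10.0.0.1) confirm the topology before `nvmf_tgt` is started inside the namespace via `ip netns exec`.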
00:33:12.371 [2024-11-20 15:42:16.062748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.371 15:42:16 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:12.371 15:42:16 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:12.371 15:42:16 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:12.371 15:42:16 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:12.371 15:42:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:12.371 15:42:16 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:12.371 15:42:16 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:12.371 15:42:16 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:12.371 15:42:16 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.371 15:42:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:12.371 [2024-11-20 15:42:16.206040] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:12.371 15:42:16 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.371 15:42:16 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:12.371 15:42:16 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:12.371 15:42:16 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:12.371 15:42:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:12.371 ************************************ 00:33:12.371 START TEST fio_dif_1_default 00:33:12.371 ************************************ 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:12.371 bdev_null0 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.371 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:12.631 [2024-11-20 15:42:16.278366] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:12.632 { 00:33:12.632 "params": { 00:33:12.632 "name": "Nvme$subsystem", 00:33:12.632 "trtype": "$TEST_TRANSPORT", 00:33:12.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:12.632 "adrfam": "ipv4", 00:33:12.632 "trsvcid": "$NVMF_PORT", 00:33:12.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:12.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:12.632 "hdgst": ${hdgst:-false}, 00:33:12.632 "ddgst": ${ddgst:-false} 00:33:12.632 }, 00:33:12.632 "method": "bdev_nvme_attach_controller" 00:33:12.632 } 00:33:12.632 EOF 00:33:12.632 )") 
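Subsystem 0 above is assembled by `create_subsystem` in dif.sh with four RPCs: a null bdev with a 512-byte block, 16-byte metadata, and DIF type 1; an NVMe-oF subsystem; a namespace; and a TCP listener. A minimal sketch of that sequence, with `rpc.py` stubbed by an `echo` so the commands are only printed (the real harness sends them to the target's `/var/tmp/spdk.sock`):

```shell
# Stub: print each RPC instead of invoking SPDK's scripts/rpc.py.
rpc() { echo "rpc.py $*"; }

# Mirror of dif.sh's create_subsystem, using the arguments seen in this log.
create_subsystem() {
    local id=$1
    rpc bdev_null_create "bdev_null$id" 64 512 --md-size 16 --dif-type 1
    rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$id" \
        --serial-number "53313233-$id" --allow-any-host
    rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$id" "bdev_null$id"
    rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$id" \
        -t tcp -a 10.0.0.2 -s 4420
}

create_subsystem 0
```

The `--dif-insert-or-strip` transport option set earlier is what makes the target generate and verify protection information on behalf of the host for these DIF-formatted namespaces.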
00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
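The heredoc loop above (`gen_nvmf_target_json`) emits one `bdev_nvme_attach_controller` params block per subsystem id and joins them with commas before piping through `jq`. A simplified sketch of the same idea, assuming the fixed target address and port from this run and skipping the `jq` pretty-printing step:

```shell
# Simplified stand-in for nvmf/common.sh's gen_nvmf_target_json:
# one attach-controller fragment per subsystem id, comma-joined.
gen_target_json() {
    local frags=() sub
    for sub in "$@"; do
        frags+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$sub" "$sub" "$sub")")
    done
    local IFS=,
    printf '%s\n' "${frags[*]}"
}

gen_target_json 0
```

fio receives this on `/dev/fd/62` via `--spdk_json_conf`, so the spdk_bdev ioengine attaches one NVMe controller per configured subsystem without any file ever touching disk.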
00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:12.632 "params": { 00:33:12.632 "name": "Nvme0", 00:33:12.632 "trtype": "tcp", 00:33:12.632 "traddr": "10.0.0.2", 00:33:12.632 "adrfam": "ipv4", 00:33:12.632 "trsvcid": "4420", 00:33:12.632 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:12.632 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:12.632 "hdgst": false, 00:33:12.632 "ddgst": false 00:33:12.632 }, 00:33:12.632 "method": "bdev_nvme_attach_controller" 00:33:12.632 }' 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:12.632 15:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:12.892 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:12.892 fio-3.35 
00:33:12.892 Starting 1 thread
00:33:25.103
00:33:25.103 filename0: (groupid=0, jobs=1): err= 0: pid=2417822: Wed Nov 20 15:42:27 2024
00:33:25.103 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10033msec)
00:33:25.103 slat (nsec): min=6050, max=32217, avg=6438.81, stdev=1162.21
00:33:25.103 clat (usec): min=40866, max=44214, avg=41783.05, stdev=432.96
00:33:25.103 lat (usec): min=40872, max=44246, avg=41789.49, stdev=433.21
00:33:25.103 clat percentiles (usec):
00:33:25.103 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:33:25.103 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206],
00:33:25.103 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:33:25.103 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303],
00:33:25.103 | 99.99th=[44303]
00:33:25.103 bw ( KiB/s): min= 352, max= 384, per=99.81%, avg=382.40, stdev= 7.16, samples=20
00:33:25.103 iops : min= 88, max= 96, avg=95.60, stdev= 1.79, samples=20
00:33:25.103 lat (msec) : 50=100.00%
00:33:25.103 cpu : usr=92.46%, sys=7.26%, ctx=15, majf=0, minf=0
00:33:25.103 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:33:25.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:25.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:25.103 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:25.103 latency : target=0, window=0, percentile=100.00%, depth=4
00:33:25.103
00:33:25.103 Run status group 0 (all jobs):
00:33:25.103 READ: bw=383KiB/s (392kB/s), 383KiB/s-383KiB/s (392kB/s-392kB/s), io=3840KiB (3932kB), run=10033-10033msec
00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0
00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub
00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@"
00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_default --
target/dif.sh@46 -- # destroy_subsystem 0 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.103 00:33:25.103 real 0m11.154s 00:33:25.103 user 0m15.877s 00:33:25.103 sys 0m1.037s 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:25.103 ************************************ 00:33:25.103 END TEST fio_dif_1_default 00:33:25.103 ************************************ 00:33:25.103 15:42:27 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:25.103 15:42:27 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:25.103 15:42:27 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:25.103 15:42:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:25.103 ************************************ 00:33:25.103 START TEST fio_dif_1_multi_subsystems 00:33:25.103 ************************************ 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:25.103 bdev_null0 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:25.103 [2024-11-20 15:42:27.500483] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:25.103 bdev_null1 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.103 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:25.103 15:42:27 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:25.104 { 00:33:25.104 "params": { 00:33:25.104 "name": "Nvme$subsystem", 00:33:25.104 "trtype": "$TEST_TRANSPORT", 00:33:25.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:25.104 "adrfam": "ipv4", 00:33:25.104 "trsvcid": "$NVMF_PORT", 00:33:25.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:25.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:25.104 "hdgst": ${hdgst:-false}, 00:33:25.104 "ddgst": ${ddgst:-false} 00:33:25.104 }, 00:33:25.104 "method": "bdev_nvme_attach_controller" 00:33:25.104 } 00:33:25.104 EOF 00:33:25.104 )") 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:25.104 15:42:27 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:25.104 { 00:33:25.104 "params": { 00:33:25.104 "name": "Nvme$subsystem", 00:33:25.104 "trtype": "$TEST_TRANSPORT", 00:33:25.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:25.104 "adrfam": "ipv4", 00:33:25.104 "trsvcid": "$NVMF_PORT", 00:33:25.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:25.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:25.104 "hdgst": ${hdgst:-false}, 00:33:25.104 "ddgst": ${ddgst:-false} 00:33:25.104 }, 00:33:25.104 "method": "bdev_nvme_attach_controller" 00:33:25.104 } 00:33:25.104 EOF 00:33:25.104 )") 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:25.104 "params": { 00:33:25.104 "name": "Nvme0", 00:33:25.104 "trtype": "tcp", 00:33:25.104 "traddr": "10.0.0.2", 00:33:25.104 "adrfam": "ipv4", 00:33:25.104 "trsvcid": "4420", 00:33:25.104 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:25.104 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:25.104 "hdgst": false, 00:33:25.104 "ddgst": false 00:33:25.104 }, 00:33:25.104 "method": "bdev_nvme_attach_controller" 00:33:25.104 },{ 00:33:25.104 "params": { 00:33:25.104 "name": "Nvme1", 00:33:25.104 "trtype": "tcp", 00:33:25.104 "traddr": "10.0.0.2", 00:33:25.104 "adrfam": "ipv4", 00:33:25.104 "trsvcid": "4420", 00:33:25.104 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:25.104 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:25.104 "hdgst": false, 00:33:25.104 "ddgst": false 00:33:25.104 }, 00:33:25.104 "method": "bdev_nvme_attach_controller" 00:33:25.104 }' 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:25.104 15:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:25.104 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:25.104 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:25.104 fio-3.35 00:33:25.104 Starting 2 threads 00:33:35.083 00:33:35.083 filename0: (groupid=0, jobs=1): err= 0: pid=2419786: Wed Nov 20 15:42:38 2024 00:33:35.083 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10010msec) 00:33:35.083 slat (nsec): min=6013, max=37839, avg=7752.29, stdev=2644.81 00:33:35.083 clat (usec): min=40793, max=42846, avg=41000.94, stdev=180.49 00:33:35.083 lat (usec): min=40799, max=42884, avg=41008.69, stdev=181.17 00:33:35.083 clat percentiles (usec): 00:33:35.083 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:33:35.083 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:35.083 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:35.083 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:33:35.083 | 99.99th=[42730] 00:33:35.083 bw ( KiB/s): min= 384, max= 416, per=32.94%, avg=388.80, stdev=11.72, samples=20 00:33:35.083 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:33:35.083 lat (msec) : 50=100.00% 00:33:35.083 cpu : usr=97.05%, sys=2.69%, ctx=12, majf=0, minf=167 00:33:35.083 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:35.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:35.083 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:35.083 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:35.083 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:35.083 filename1: (groupid=0, jobs=1): err= 0: pid=2419787: Wed Nov 20 15:42:38 2024 00:33:35.083 read: IOPS=197, BW=789KiB/s (808kB/s)(7920KiB/10039msec) 00:33:35.083 slat (nsec): min=6010, max=44189, avg=7110.10, stdev=2169.02 00:33:35.083 clat (usec): min=384, max=42550, avg=20260.36, stdev=20364.08 00:33:35.083 lat (usec): min=390, max=42556, avg=20267.47, stdev=20363.49 00:33:35.083 clat percentiles (usec): 00:33:35.083 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 420], 20.00th=[ 445], 00:33:35.083 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 693], 60.00th=[40633], 00:33:35.083 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:33:35.083 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:35.083 | 99.99th=[42730] 00:33:35.083 bw ( KiB/s): min= 704, max= 896, per=67.07%, avg=790.40, stdev=47.69, samples=20 00:33:35.083 iops : min= 176, max= 224, avg=197.60, stdev=11.92, samples=20 00:33:35.083 lat (usec) : 500=27.17%, 750=23.94%, 1000=0.20% 00:33:35.083 lat (msec) : 2=0.20%, 50=48.48% 00:33:35.083 cpu : usr=96.68%, sys=3.07%, ctx=13, majf=0, minf=105 00:33:35.083 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:35.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:35.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:35.083 issued rwts: total=1980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:35.083 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:35.083 00:33:35.083 Run status group 0 (all jobs): 00:33:35.083 READ: bw=1178KiB/s (1206kB/s), 390KiB/s-789KiB/s (399kB/s-808kB/s), io=11.5MiB (12.1MB), run=10010-10039msec 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 
00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:35.083 15:42:38 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.083 00:33:35.083 real 0m11.467s 00:33:35.083 user 0m26.621s 00:33:35.083 sys 0m1.002s 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:35.083 15:42:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:35.083 ************************************ 00:33:35.083 END TEST fio_dif_1_multi_subsystems 00:33:35.083 ************************************ 00:33:35.083 15:42:38 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:35.083 15:42:38 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:35.083 15:42:38 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:35.083 15:42:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:35.342 ************************************ 00:33:35.342 START TEST fio_dif_rand_params 00:33:35.342 ************************************ 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:35.342 15:42:39 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:35.342 bdev_null0 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:35.342 [2024-11-20 15:42:39.037371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:35.342 { 00:33:35.342 "params": { 00:33:35.342 "name": "Nvme$subsystem", 00:33:35.342 "trtype": "$TEST_TRANSPORT", 00:33:35.342 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:33:35.342 "adrfam": "ipv4", 00:33:35.342 "trsvcid": "$NVMF_PORT", 00:33:35.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:35.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:35.342 "hdgst": ${hdgst:-false}, 00:33:35.342 "ddgst": ${ddgst:-false} 00:33:35.342 }, 00:33:35.342 "method": "bdev_nvme_attach_controller" 00:33:35.342 } 00:33:35.342 EOF 00:33:35.342 )") 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:35.342 15:42:39 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:35.342 "params": { 00:33:35.342 "name": "Nvme0", 00:33:35.342 "trtype": "tcp", 00:33:35.342 "traddr": "10.0.0.2", 00:33:35.342 "adrfam": "ipv4", 00:33:35.342 "trsvcid": "4420", 00:33:35.342 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:35.342 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:35.342 "hdgst": false, 00:33:35.342 "ddgst": false 00:33:35.342 }, 00:33:35.342 "method": "bdev_nvme_attach_controller" 00:33:35.342 }' 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:35.342 15:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:35.600 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:35.600 ... 00:33:35.600 fio-3.35 00:33:35.600 Starting 3 threads 00:33:42.167 00:33:42.167 filename0: (groupid=0, jobs=1): err= 0: pid=2421749: Wed Nov 20 15:42:44 2024 00:33:42.167 read: IOPS=319, BW=40.0MiB/s (41.9MB/s)(202MiB/5047msec) 00:33:42.167 slat (nsec): min=6207, max=25670, avg=10588.82, stdev=1819.56 00:33:42.167 clat (usec): min=3459, max=52037, avg=9340.40, stdev=5800.60 00:33:42.167 lat (usec): min=3465, max=52044, avg=9350.99, stdev=5800.54 00:33:42.167 clat percentiles (usec): 00:33:42.167 | 1.00th=[ 5276], 5.00th=[ 6194], 10.00th=[ 6652], 20.00th=[ 7570], 00:33:42.167 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8979], 00:33:42.167 | 70.00th=[ 9241], 80.00th=[ 9634], 90.00th=[10159], 95.00th=[10683], 00:33:42.167 | 99.00th=[47973], 99.50th=[49021], 99.90th=[50070], 99.95th=[52167], 00:33:42.167 | 99.99th=[52167] 00:33:42.167 bw ( KiB/s): min=19456, max=47616, per=34.93%, avg=41267.20, stdev=8333.71, samples=10 00:33:42.167 iops : min= 152, max= 372, avg=322.40, stdev=65.11, samples=10 00:33:42.167 lat (msec) : 4=0.56%, 10=87.42%, 20=9.85%, 50=2.11%, 100=0.06% 00:33:42.167 cpu : usr=94.57%, sys=5.13%, ctx=10, majf=0, minf=53 00:33:42.167 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:42.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.167 issued rwts: total=1614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.167 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:42.167 filename0: (groupid=0, jobs=1): err= 0: pid=2421750: Wed Nov 20 15:42:44 2024 00:33:42.167 read: IOPS=307, BW=38.5MiB/s (40.3MB/s)(193MiB/5004msec) 00:33:42.167 slat (nsec): min=6229, max=26654, avg=10835.10, stdev=1776.62 
00:33:42.167 clat (usec): min=3517, max=49998, avg=9733.39, stdev=5346.19 00:33:42.167 lat (usec): min=3525, max=50010, avg=9744.22, stdev=5346.20 00:33:42.167 clat percentiles (usec): 00:33:42.167 | 1.00th=[ 5342], 5.00th=[ 6128], 10.00th=[ 6587], 20.00th=[ 7898], 00:33:42.167 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9634], 00:33:42.167 | 70.00th=[10028], 80.00th=[10421], 90.00th=[11076], 95.00th=[11600], 00:33:42.167 | 99.00th=[47973], 99.50th=[49021], 99.90th=[50070], 99.95th=[50070], 00:33:42.167 | 99.99th=[50070] 00:33:42.167 bw ( KiB/s): min=34048, max=44032, per=32.84%, avg=38798.22, stdev=3602.49, samples=9 00:33:42.167 iops : min= 266, max= 344, avg=303.11, stdev=28.14, samples=9 00:33:42.167 lat (msec) : 4=0.26%, 10=68.31%, 20=29.68%, 50=1.75% 00:33:42.167 cpu : usr=94.60%, sys=5.10%, ctx=9, majf=0, minf=22 00:33:42.167 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:42.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.167 issued rwts: total=1540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.167 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:42.167 filename0: (groupid=0, jobs=1): err= 0: pid=2421751: Wed Nov 20 15:42:44 2024 00:33:42.167 read: IOPS=300, BW=37.6MiB/s (39.4MB/s)(188MiB/5003msec) 00:33:42.167 slat (nsec): min=6313, max=38332, avg=10892.79, stdev=2037.94 00:33:42.167 clat (usec): min=3430, max=89165, avg=9966.13, stdev=5310.12 00:33:42.167 lat (usec): min=3437, max=89178, avg=9977.02, stdev=5310.19 00:33:42.167 clat percentiles (usec): 00:33:42.167 | 1.00th=[ 3851], 5.00th=[ 6259], 10.00th=[ 6783], 20.00th=[ 8160], 00:33:42.167 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:33:42.167 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11469], 95.00th=[11994], 00:33:42.167 | 99.00th=[45351], 99.50th=[49021], 99.90th=[51643], 
99.95th=[89654], 00:33:42.167 | 99.99th=[89654] 00:33:42.167 bw ( KiB/s): min=30464, max=41984, per=32.82%, avg=38769.78, stdev=3570.51, samples=9 00:33:42.167 iops : min= 238, max= 328, avg=302.89, stdev=27.89, samples=9 00:33:42.167 lat (msec) : 4=1.46%, 10=57.65%, 20=39.36%, 50=1.33%, 100=0.20% 00:33:42.167 cpu : usr=93.98%, sys=5.74%, ctx=7, majf=0, minf=66 00:33:42.167 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:42.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.167 issued rwts: total=1504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.167 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:42.167 00:33:42.167 Run status group 0 (all jobs): 00:33:42.167 READ: bw=115MiB/s (121MB/s), 37.6MiB/s-40.0MiB/s (39.4MB/s-41.9MB/s), io=582MiB (611MB), run=5003-5047msec 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:42.167 15:42:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.167 bdev_null0 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.167 [2024-11-20 15:42:45.152171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.167 bdev_null1 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.167 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:42.168 bdev_null2 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:42.168 { 00:33:42.168 "params": { 00:33:42.168 "name": "Nvme$subsystem", 00:33:42.168 "trtype": "$TEST_TRANSPORT", 00:33:42.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:42.168 "adrfam": "ipv4", 00:33:42.168 "trsvcid": "$NVMF_PORT", 00:33:42.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:42.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:42.168 "hdgst": ${hdgst:-false}, 00:33:42.168 "ddgst": ${ddgst:-false} 00:33:42.168 }, 00:33:42.168 "method": "bdev_nvme_attach_controller" 00:33:42.168 } 00:33:42.168 EOF 00:33:42.168 )") 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:42.168 15:42:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:42.168 { 00:33:42.168 "params": { 00:33:42.168 "name": "Nvme$subsystem", 00:33:42.168 "trtype": "$TEST_TRANSPORT", 00:33:42.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:42.168 "adrfam": "ipv4", 00:33:42.168 "trsvcid": "$NVMF_PORT", 00:33:42.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:42.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:42.168 "hdgst": ${hdgst:-false}, 00:33:42.168 "ddgst": ${ddgst:-false} 00:33:42.168 }, 00:33:42.168 "method": "bdev_nvme_attach_controller" 00:33:42.168 } 00:33:42.168 EOF 00:33:42.168 )") 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:42.168 15:42:45 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:42.168 { 00:33:42.168 "params": { 00:33:42.168 "name": "Nvme$subsystem", 00:33:42.168 "trtype": "$TEST_TRANSPORT", 00:33:42.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:42.168 "adrfam": "ipv4", 00:33:42.168 "trsvcid": "$NVMF_PORT", 00:33:42.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:42.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:42.168 "hdgst": ${hdgst:-false}, 00:33:42.168 "ddgst": ${ddgst:-false} 00:33:42.168 }, 00:33:42.168 "method": "bdev_nvme_attach_controller" 00:33:42.168 } 00:33:42.168 EOF 00:33:42.168 )") 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:42.168 "params": { 00:33:42.168 "name": "Nvme0", 00:33:42.168 "trtype": "tcp", 00:33:42.168 "traddr": "10.0.0.2", 00:33:42.168 "adrfam": "ipv4", 00:33:42.168 "trsvcid": "4420", 00:33:42.168 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:42.168 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:42.168 "hdgst": false, 00:33:42.168 "ddgst": false 00:33:42.168 }, 00:33:42.168 "method": "bdev_nvme_attach_controller" 00:33:42.168 },{ 00:33:42.168 "params": { 00:33:42.168 "name": "Nvme1", 00:33:42.168 "trtype": "tcp", 00:33:42.168 "traddr": "10.0.0.2", 00:33:42.168 "adrfam": "ipv4", 00:33:42.168 "trsvcid": "4420", 00:33:42.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:42.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:42.168 "hdgst": false, 00:33:42.168 "ddgst": false 00:33:42.168 }, 00:33:42.168 "method": "bdev_nvme_attach_controller" 00:33:42.168 },{ 00:33:42.168 "params": { 00:33:42.168 "name": "Nvme2", 00:33:42.168 "trtype": "tcp", 00:33:42.168 "traddr": "10.0.0.2", 00:33:42.168 "adrfam": "ipv4", 00:33:42.168 "trsvcid": "4420", 00:33:42.168 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:42.168 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:42.168 "hdgst": false, 00:33:42.168 "ddgst": false 00:33:42.168 }, 00:33:42.168 "method": "bdev_nvme_attach_controller" 00:33:42.168 }' 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:42.168 15:42:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:42.168 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:42.169 15:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:42.169 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:42.169 ... 00:33:42.169 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:42.169 ... 00:33:42.169 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:42.169 ... 
00:33:42.169 fio-3.35 00:33:42.169 Starting 24 threads 00:33:54.367 00:33:54.367 filename0: (groupid=0, jobs=1): err= 0: pid=2422800: Wed Nov 20 15:42:56 2024 00:33:54.367 read: IOPS=570, BW=2284KiB/s (2339kB/s)(22.3MiB/10004msec) 00:33:54.367 slat (nsec): min=7201, max=50126, avg=15002.06, stdev=7190.05 00:33:54.367 clat (usec): min=9211, max=30246, avg=27899.79, stdev=1586.99 00:33:54.367 lat (usec): min=9244, max=30258, avg=27914.79, stdev=1586.16 00:33:54.367 clat percentiles (usec): 00:33:54.367 | 1.00th=[15795], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:54.367 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:33:54.367 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:54.367 | 99.00th=[28705], 99.50th=[29492], 99.90th=[30278], 99.95th=[30278], 00:33:54.367 | 99.99th=[30278] 00:33:54.367 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2283.79, stdev=64.19, samples=19 00:33:54.367 iops : min= 544, max= 608, avg=570.95, stdev=16.05, samples=19 00:33:54.367 lat (msec) : 10=0.28%, 20=0.84%, 50=98.88% 00:33:54.367 cpu : usr=98.54%, sys=1.11%, ctx=8, majf=0, minf=9 00:33:54.367 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:54.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.367 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.367 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.367 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.367 filename0: (groupid=0, jobs=1): err= 0: pid=2422801: Wed Nov 20 15:42:56 2024 00:33:54.367 read: IOPS=574, BW=2300KiB/s (2355kB/s)(22.5MiB/10002msec) 00:33:54.367 slat (nsec): min=5988, max=45218, avg=14065.72, stdev=6581.87 00:33:54.367 clat (usec): min=10770, max=39918, avg=27724.18, stdev=1885.53 00:33:54.367 lat (usec): min=10783, max=39936, avg=27738.25, stdev=1886.17 00:33:54.367 clat percentiles (usec): 00:33:54.367 | 
1.00th=[17171], 5.00th=[27395], 10.00th=[27657], 20.00th=[27919], 00:33:54.367 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:33:54.367 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:54.367 | 99.00th=[28705], 99.50th=[28967], 99.90th=[39584], 99.95th=[39584], 00:33:54.367 | 99.99th=[40109] 00:33:54.367 bw ( KiB/s): min= 2176, max= 2560, per=4.20%, avg=2299.79, stdev=93.73, samples=19 00:33:54.367 iops : min= 544, max= 640, avg=574.95, stdev=23.43, samples=19 00:33:54.367 lat (msec) : 20=2.19%, 50=97.81% 00:33:54.367 cpu : usr=98.34%, sys=1.31%, ctx=62, majf=0, minf=9 00:33:54.367 IO depths : 1=3.5%, 2=9.5%, 4=24.2%, 8=53.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:33:54.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.367 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.367 issued rwts: total=5750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.367 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.367 filename0: (groupid=0, jobs=1): err= 0: pid=2422802: Wed Nov 20 15:42:56 2024 00:33:54.367 read: IOPS=570, BW=2284KiB/s (2339kB/s)(22.3MiB/10004msec) 00:33:54.367 slat (nsec): min=7470, max=48222, avg=20861.77, stdev=7250.79 00:33:54.367 clat (usec): min=9108, max=30188, avg=27851.29, stdev=1580.52 00:33:54.367 lat (usec): min=9130, max=30214, avg=27872.15, stdev=1580.25 00:33:54.367 clat percentiles (usec): 00:33:54.367 | 1.00th=[15795], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:54.367 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:33:54.367 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:54.367 | 99.00th=[28705], 99.50th=[28967], 99.90th=[30016], 99.95th=[30278], 00:33:54.368 | 99.99th=[30278] 00:33:54.368 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2283.79, stdev=64.19, samples=19 00:33:54.368 iops : min= 544, max= 608, avg=570.95, stdev=16.05, samples=19 
00:33:54.368 lat (msec) : 10=0.28%, 20=0.84%, 50=98.88% 00:33:54.368 cpu : usr=98.18%, sys=1.47%, ctx=18, majf=0, minf=9 00:33:54.368 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:54.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.368 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.368 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.368 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.368 filename0: (groupid=0, jobs=1): err= 0: pid=2422803: Wed Nov 20 15:42:56 2024 00:33:54.368 read: IOPS=568, BW=2273KiB/s (2328kB/s)(22.2MiB/10006msec) 00:33:54.368 slat (nsec): min=5520, max=46581, avg=19206.72, stdev=6074.34 00:33:54.368 clat (usec): min=6440, max=55425, avg=27980.79, stdev=2119.56 00:33:54.368 lat (usec): min=6452, max=55441, avg=28000.00, stdev=2119.34 00:33:54.368 clat percentiles (usec): 00:33:54.368 | 1.00th=[22938], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:54.368 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:54.368 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:54.368 | 99.00th=[28967], 99.50th=[30802], 99.90th=[55313], 99.95th=[55313], 00:33:54.368 | 99.99th=[55313] 00:33:54.368 bw ( KiB/s): min= 2048, max= 2432, per=4.14%, avg=2268.00, stdev=85.91, samples=20 00:33:54.368 iops : min= 512, max= 608, avg=567.00, stdev=21.48, samples=20 00:33:54.368 lat (msec) : 10=0.28%, 20=0.28%, 50=99.16%, 100=0.28% 00:33:54.368 cpu : usr=98.16%, sys=1.49%, ctx=16, majf=0, minf=9 00:33:54.368 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:54.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.368 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.368 issued rwts: total=5686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.368 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:33:54.368 filename0: (groupid=0, jobs=1): err= 0: pid=2422804: Wed Nov 20 15:42:56 2024 00:33:54.368 read: IOPS=567, BW=2271KiB/s (2326kB/s)(22.2MiB/10004msec) 00:33:54.368 slat (nsec): min=4036, max=45339, avg=21976.23, stdev=6299.38 00:33:54.368 clat (usec): min=16228, max=44388, avg=27983.00, stdev=1174.21 00:33:54.368 lat (usec): min=16244, max=44401, avg=28004.98, stdev=1173.65 00:33:54.368 clat percentiles (usec): 00:33:54.368 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:54.368 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:54.368 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:54.368 | 99.00th=[28705], 99.50th=[29230], 99.90th=[44303], 99.95th=[44303], 00:33:54.368 | 99.99th=[44303] 00:33:54.368 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2263.58, stdev=61.13, samples=19 00:33:54.368 iops : min= 544, max= 576, avg=565.89, stdev=15.28, samples=19 00:33:54.368 lat (msec) : 20=0.28%, 50=99.72% 00:33:54.368 cpu : usr=98.53%, sys=1.13%, ctx=14, majf=0, minf=9 00:33:54.368 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:54.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.368 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.368 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.368 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.368 filename0: (groupid=0, jobs=1): err= 0: pid=2422805: Wed Nov 20 15:42:56 2024 00:33:54.368 read: IOPS=583, BW=2333KiB/s (2389kB/s)(22.8MiB/10013msec) 00:33:54.368 slat (nsec): min=6178, max=48055, avg=11892.16, stdev=4296.18 00:33:54.368 clat (usec): min=2084, max=34976, avg=27327.08, stdev=4054.01 00:33:54.368 lat (usec): min=2095, max=34985, avg=27338.97, stdev=4054.02 00:33:54.368 clat percentiles (usec): 00:33:54.368 | 1.00th=[ 2442], 5.00th=[27657], 10.00th=[27657], 
20.00th=[27919], 00:33:54.368 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:33:54.368 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:33:54.368 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29230], 99.95th=[29230], 00:33:54.368 | 99.99th=[34866] 00:33:54.368 bw ( KiB/s): min= 2176, max= 3456, per=4.26%, avg=2329.60, stdev=271.05, samples=20 00:33:54.368 iops : min= 544, max= 864, avg=582.40, stdev=67.76, samples=20 00:33:54.368 lat (msec) : 4=2.04%, 10=0.43%, 20=1.10%, 50=96.44% 00:33:54.368 cpu : usr=98.17%, sys=1.47%, ctx=26, majf=0, minf=9 00:33:54.368 IO depths : 1=6.0%, 2=12.2%, 4=24.7%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:54.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.368 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.368 issued rwts: total=5840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.368 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.368 filename0: (groupid=0, jobs=1): err= 0: pid=2422806: Wed Nov 20 15:42:56 2024 00:33:54.368 read: IOPS=570, BW=2284KiB/s (2339kB/s)(22.3MiB/10004msec) 00:33:54.368 slat (nsec): min=7643, max=47919, avg=22701.73, stdev=6932.92 00:33:54.368 clat (usec): min=8647, max=30131, avg=27827.01, stdev=1573.85 00:33:54.368 lat (usec): min=8660, max=30155, avg=27849.71, stdev=1574.35 00:33:54.368 clat percentiles (usec): 00:33:54.368 | 1.00th=[16057], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:54.368 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:54.368 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:33:54.368 | 99.00th=[28705], 99.50th=[29230], 99.90th=[30016], 99.95th=[30016], 00:33:54.368 | 99.99th=[30016] 00:33:54.368 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2283.79, stdev=64.19, samples=19 00:33:54.368 iops : min= 544, max= 608, avg=570.95, stdev=16.05, samples=19 00:33:54.368 lat (msec) : 10=0.16%, 
20=0.91%, 50=98.93% 00:33:54.368 cpu : usr=98.43%, sys=1.22%, ctx=18, majf=0, minf=9 00:33:54.368 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:54.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.368 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.368 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.368 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.368 filename0: (groupid=0, jobs=1): err= 0: pid=2422807: Wed Nov 20 15:42:56 2024 00:33:54.368 read: IOPS=569, BW=2277KiB/s (2331kB/s)(22.3MiB/10011msec) 00:33:54.368 slat (nsec): min=4215, max=54252, avg=22010.38, stdev=7664.84 00:33:54.368 clat (usec): min=12758, max=44747, avg=27907.80, stdev=1590.72 00:33:54.368 lat (usec): min=12766, max=44760, avg=27929.81, stdev=1591.56 00:33:54.368 clat percentiles (usec): 00:33:54.368 | 1.00th=[20579], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:54.368 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:54.368 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:33:54.368 | 99.00th=[30016], 99.50th=[36439], 99.90th=[43779], 99.95th=[43779], 00:33:54.368 | 99.99th=[44827] 00:33:54.368 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2272.10, stdev=52.72, samples=20 00:33:54.368 iops : min= 544, max= 576, avg=568.00, stdev=13.17, samples=20 00:33:54.368 lat (msec) : 20=0.91%, 50=99.09% 00:33:54.368 cpu : usr=98.25%, sys=1.38%, ctx=19, majf=0, minf=9 00:33:54.368 IO depths : 1=5.4%, 2=11.5%, 4=24.5%, 8=51.5%, 16=7.2%, 32=0.0%, >=64=0.0% 00:33:54.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.368 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.368 issued rwts: total=5698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.368 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.368 filename1: (groupid=0, 
jobs=1): err= 0: pid=2422808: Wed Nov 20 15:42:56 2024 00:33:54.368 read: IOPS=583, BW=2334KiB/s (2390kB/s)(22.8MiB/10007msec) 00:33:54.368 slat (nsec): min=6420, max=62064, avg=11559.30, stdev=4566.28 00:33:54.368 clat (usec): min=1944, max=29264, avg=27313.12, stdev=4112.06 00:33:54.368 lat (usec): min=1954, max=29276, avg=27324.68, stdev=4111.38 00:33:54.368 clat percentiles (usec): 00:33:54.368 | 1.00th=[ 2573], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:33:54.368 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:33:54.368 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:54.368 | 99.00th=[28705], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:33:54.368 | 99.99th=[29230] 00:33:54.368 bw ( KiB/s): min= 2176, max= 3456, per=4.26%, avg=2329.60, stdev=274.22, samples=20 00:33:54.368 iops : min= 544, max= 864, avg=582.40, stdev=68.55, samples=20 00:33:54.368 lat (msec) : 2=0.03%, 4=2.16%, 10=0.27%, 20=1.10%, 50=96.44% 00:33:54.368 cpu : usr=98.32%, sys=1.34%, ctx=12, majf=0, minf=9 00:33:54.368 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:54.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.368 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.368 issued rwts: total=5840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.368 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.368 filename1: (groupid=0, jobs=1): err= 0: pid=2422809: Wed Nov 20 15:42:56 2024 00:33:54.368 read: IOPS=570, BW=2284KiB/s (2339kB/s)(22.3MiB/10004msec) 00:33:54.368 slat (nsec): min=7620, max=49254, avg=17920.53, stdev=7537.37 00:33:54.368 clat (usec): min=11205, max=30153, avg=27875.98, stdev=1567.37 00:33:54.368 lat (usec): min=11233, max=30171, avg=27893.90, stdev=1566.92 00:33:54.368 clat percentiles (usec): 00:33:54.368 | 1.00th=[15795], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:54.368 | 
30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:33:54.368 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:54.368 | 99.00th=[28705], 99.50th=[29230], 99.90th=[30016], 99.95th=[30016], 00:33:54.368 | 99.99th=[30278] 00:33:54.368 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2283.79, stdev=64.19, samples=19 00:33:54.368 iops : min= 544, max= 608, avg=570.95, stdev=16.05, samples=19 00:33:54.368 lat (msec) : 20=1.12%, 50=98.88% 00:33:54.368 cpu : usr=98.35%, sys=1.30%, ctx=14, majf=0, minf=9 00:33:54.368 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:54.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.368 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.368 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.368 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.369 filename1: (groupid=0, jobs=1): err= 0: pid=2422810: Wed Nov 20 15:42:56 2024 00:33:54.369 read: IOPS=567, BW=2270KiB/s (2324kB/s)(22.2MiB/10011msec) 00:33:54.369 slat (nsec): min=7142, max=45337, avg=21993.50, stdev=6543.95 00:33:54.369 clat (usec): min=14958, max=48253, avg=28005.10, stdev=1556.05 00:33:54.369 lat (usec): min=14972, max=48265, avg=28027.09, stdev=1555.93 00:33:54.369 clat percentiles (usec): 00:33:54.369 | 1.00th=[26346], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:54.369 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:54.369 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:54.369 | 99.00th=[29230], 99.50th=[34341], 99.90th=[47973], 99.95th=[47973], 00:33:54.369 | 99.99th=[48497] 00:33:54.369 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2265.60, stdev=60.18, samples=20 00:33:54.369 iops : min= 544, max= 576, avg=566.40, stdev=15.05, samples=20 00:33:54.369 lat (msec) : 20=0.77%, 50=99.23% 00:33:54.369 cpu : usr=98.26%, sys=1.39%, 
ctx=12, majf=0, minf=9 00:33:54.369 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:54.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.369 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.369 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.369 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.369 filename1: (groupid=0, jobs=1): err= 0: pid=2422811: Wed Nov 20 15:42:56 2024 00:33:54.369 read: IOPS=567, BW=2272KiB/s (2326kB/s)(22.2MiB/10002msec) 00:33:54.369 slat (nsec): min=7563, max=48015, avg=22177.08, stdev=6521.41 00:33:54.369 clat (usec): min=16254, max=41759, avg=27975.15, stdev=1063.60 00:33:54.369 lat (usec): min=16269, max=41772, avg=27997.33, stdev=1063.17 00:33:54.369 clat percentiles (usec): 00:33:54.369 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:54.369 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:54.369 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:54.369 | 99.00th=[28705], 99.50th=[28967], 99.90th=[41681], 99.95th=[41681], 00:33:54.369 | 99.99th=[41681] 00:33:54.369 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2263.58, stdev=61.13, samples=19 00:33:54.369 iops : min= 544, max= 576, avg=565.89, stdev=15.28, samples=19 00:33:54.369 lat (msec) : 20=0.28%, 50=99.72% 00:33:54.369 cpu : usr=98.35%, sys=1.31%, ctx=9, majf=0, minf=9 00:33:54.369 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:54.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.369 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.369 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.369 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.369 filename1: (groupid=0, jobs=1): err= 0: pid=2422812: Wed Nov 20 15:42:56 2024 
00:33:54.369 read: IOPS=567, BW=2272KiB/s (2326kB/s)(22.2MiB/10002msec) 00:33:54.369 slat (nsec): min=6029, max=46377, avg=22192.51, stdev=6776.03 00:33:54.369 clat (usec): min=14984, max=45799, avg=27966.83, stdev=1276.83 00:33:54.369 lat (usec): min=14998, max=45815, avg=27989.02, stdev=1276.85 00:33:54.369 clat percentiles (usec): 00:33:54.369 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:54.369 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:54.369 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:33:54.369 | 99.00th=[28705], 99.50th=[29754], 99.90th=[45876], 99.95th=[45876], 00:33:54.369 | 99.99th=[45876] 00:33:54.369 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2263.58, stdev=61.13, samples=19 00:33:54.369 iops : min= 544, max= 576, avg=565.89, stdev=15.28, samples=19 00:33:54.369 lat (msec) : 20=0.56%, 50=99.44% 00:33:54.369 cpu : usr=98.34%, sys=1.33%, ctx=12, majf=0, minf=9 00:33:54.369 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:54.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.369 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.369 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.369 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.369 filename1: (groupid=0, jobs=1): err= 0: pid=2422813: Wed Nov 20 15:42:56 2024 00:33:54.369 read: IOPS=568, BW=2275KiB/s (2330kB/s)(22.2MiB/10013msec) 00:33:54.369 slat (nsec): min=6055, max=34019, avg=14829.92, stdev=4497.33 00:33:54.369 clat (usec): min=15079, max=40956, avg=27992.44, stdev=796.32 00:33:54.369 lat (usec): min=15088, max=40970, avg=28007.27, stdev=796.52 00:33:54.369 clat percentiles (usec): 00:33:54.369 | 1.00th=[26084], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:54.369 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:33:54.369 | 
70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:54.369 | 99.00th=[28967], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:33:54.369 | 99.99th=[41157] 00:33:54.369 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.32, stdev=57.91, samples=19 00:33:54.369 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:33:54.369 lat (msec) : 20=0.32%, 50=99.68% 00:33:54.369 cpu : usr=98.31%, sys=1.36%, ctx=5, majf=0, minf=9 00:33:54.369 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:54.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.369 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.369 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.369 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.369 filename1: (groupid=0, jobs=1): err= 0: pid=2422814: Wed Nov 20 15:42:56 2024 00:33:54.369 read: IOPS=567, BW=2272KiB/s (2326kB/s)(22.2MiB/10002msec) 00:33:54.369 slat (nsec): min=7113, max=49016, avg=21856.16, stdev=6763.45 00:33:54.369 clat (usec): min=16233, max=41735, avg=27971.61, stdev=1154.01 00:33:54.369 lat (usec): min=16248, max=41748, avg=27993.47, stdev=1153.81 00:33:54.369 clat percentiles (usec): 00:33:54.369 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:54.369 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:54.369 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:54.369 | 99.00th=[28967], 99.50th=[33162], 99.90th=[41681], 99.95th=[41681], 00:33:54.369 | 99.99th=[41681] 00:33:54.369 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2263.58, stdev=57.35, samples=19 00:33:54.369 iops : min= 544, max= 576, avg=565.89, stdev=14.34, samples=19 00:33:54.369 lat (msec) : 20=0.28%, 50=99.72% 00:33:54.369 cpu : usr=98.40%, sys=1.26%, ctx=14, majf=0, minf=9 00:33:54.369 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.3%, 
16=6.4%, 32=0.0%, >=64=0.0% 00:33:54.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.369 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.369 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.369 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.369 filename1: (groupid=0, jobs=1): err= 0: pid=2422815: Wed Nov 20 15:42:56 2024 00:33:54.369 read: IOPS=570, BW=2284KiB/s (2339kB/s)(22.3MiB/10004msec) 00:33:54.369 slat (nsec): min=8930, max=51499, avg=23447.11, stdev=7213.03 00:33:54.369 clat (usec): min=9103, max=30136, avg=27816.41, stdev=1576.27 00:33:54.369 lat (usec): min=9116, max=30154, avg=27839.86, stdev=1576.53 00:33:54.369 clat percentiles (usec): 00:33:54.369 | 1.00th=[15795], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:54.369 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:54.369 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:33:54.369 | 99.00th=[28705], 99.50th=[28967], 99.90th=[30016], 99.95th=[30016], 00:33:54.369 | 99.99th=[30016] 00:33:54.369 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2283.79, stdev=64.19, samples=19 00:33:54.369 iops : min= 544, max= 608, avg=570.95, stdev=16.05, samples=19 00:33:54.369 lat (msec) : 10=0.28%, 20=0.84%, 50=98.88% 00:33:54.369 cpu : usr=98.38%, sys=1.28%, ctx=11, majf=0, minf=9 00:33:54.369 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:54.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.369 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.369 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.369 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.369 filename2: (groupid=0, jobs=1): err= 0: pid=2422816: Wed Nov 20 15:42:56 2024 00:33:54.369 read: IOPS=568, BW=2273KiB/s (2328kB/s)(22.2MiB/10006msec) 
00:33:54.369 slat (nsec): min=6917, max=43173, avg=17995.96, stdev=6614.89 00:33:54.369 clat (usec): min=6397, max=55211, avg=27997.23, stdev=2196.99 00:33:54.369 lat (usec): min=6404, max=55237, avg=28015.22, stdev=2196.73 00:33:54.369 clat percentiles (usec): 00:33:54.369 | 1.00th=[22414], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:54.369 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:54.369 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:54.369 | 99.00th=[28967], 99.50th=[40109], 99.90th=[55313], 99.95th=[55313], 00:33:54.369 | 99.99th=[55313] 00:33:54.369 bw ( KiB/s): min= 2048, max= 2432, per=4.14%, avg=2268.00, stdev=79.05, samples=20 00:33:54.369 iops : min= 512, max= 608, avg=567.00, stdev=19.76, samples=20 00:33:54.369 lat (msec) : 10=0.28%, 20=0.35%, 50=99.09%, 100=0.28% 00:33:54.369 cpu : usr=98.09%, sys=1.56%, ctx=16, majf=0, minf=9 00:33:54.369 IO depths : 1=3.5%, 2=9.7%, 4=24.7%, 8=53.1%, 16=9.0%, 32=0.0%, >=64=0.0% 00:33:54.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.369 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.369 issued rwts: total=5686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.369 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.369 filename2: (groupid=0, jobs=1): err= 0: pid=2422817: Wed Nov 20 15:42:56 2024 00:33:54.369 read: IOPS=568, BW=2274KiB/s (2328kB/s)(22.2MiB/10017msec) 00:33:54.369 slat (nsec): min=6246, max=50029, avg=20993.97, stdev=6668.18 00:33:54.369 clat (usec): min=19183, max=37991, avg=27966.93, stdev=794.33 00:33:54.369 lat (usec): min=19191, max=38008, avg=27987.92, stdev=794.62 00:33:54.369 clat percentiles (usec): 00:33:54.369 | 1.00th=[24511], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:54.369 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:54.369 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 
95.00th=[28443], 00:33:54.370 | 99.00th=[28967], 99.50th=[29230], 99.90th=[33817], 99.95th=[33817], 00:33:54.370 | 99.99th=[38011] 00:33:54.370 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.40, stdev=56.37, samples=20 00:33:54.370 iops : min= 544, max= 576, avg=567.60, stdev=14.09, samples=20 00:33:54.370 lat (msec) : 20=0.25%, 50=99.75% 00:33:54.370 cpu : usr=98.30%, sys=1.36%, ctx=13, majf=0, minf=9 00:33:54.370 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:54.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.370 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.370 issued rwts: total=5694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.370 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.370 filename2: (groupid=0, jobs=1): err= 0: pid=2422818: Wed Nov 20 15:42:56 2024 00:33:54.370 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:33:54.370 slat (usec): min=8, max=440, avg=57.07, stdev=13.79 00:33:54.370 clat (usec): min=13937, max=30084, avg=27587.51, stdev=1042.49 00:33:54.370 lat (usec): min=14087, max=30148, avg=27644.58, stdev=1037.00 00:33:54.370 clat percentiles (usec): 00:33:54.370 | 1.00th=[26084], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:33:54.370 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:33:54.370 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:54.370 | 99.00th=[28443], 99.50th=[28705], 99.90th=[29754], 99.95th=[30016], 00:33:54.370 | 99.99th=[30016] 00:33:54.370 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2277.05, stdev=53.61, samples=19 00:33:54.370 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:33:54.370 lat (msec) : 20=0.56%, 50=99.44% 00:33:54.370 cpu : usr=98.68%, sys=0.93%, ctx=13, majf=0, minf=9 00:33:54.370 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:54.370 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.370 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.370 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.370 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.370 filename2: (groupid=0, jobs=1): err= 0: pid=2422819: Wed Nov 20 15:42:56 2024 00:33:54.370 read: IOPS=575, BW=2301KiB/s (2356kB/s)(22.5MiB/10008msec) 00:33:54.370 slat (nsec): min=5020, max=50575, avg=13141.28, stdev=7197.77 00:33:54.370 clat (usec): min=12737, max=53421, avg=27760.94, stdev=3209.88 00:33:54.370 lat (usec): min=12744, max=53436, avg=27774.08, stdev=3209.06 00:33:54.370 clat percentiles (usec): 00:33:54.370 | 1.00th=[21103], 5.00th=[22152], 10.00th=[22938], 20.00th=[25297], 00:33:54.370 | 30.00th=[27919], 40.00th=[27919], 50.00th=[28181], 60.00th=[28181], 00:33:54.370 | 70.00th=[28181], 80.00th=[28181], 90.00th=[32637], 95.00th=[33817], 00:33:54.370 | 99.00th=[34341], 99.50th=[34866], 99.90th=[42206], 99.95th=[42206], 00:33:54.370 | 99.99th=[53216] 00:33:54.370 bw ( KiB/s): min= 2160, max= 2336, per=4.19%, avg=2293.89, stdev=38.90, samples=19 00:33:54.370 iops : min= 540, max= 584, avg=573.47, stdev= 9.73, samples=19 00:33:54.370 lat (msec) : 20=0.64%, 50=99.32%, 100=0.03% 00:33:54.370 cpu : usr=98.16%, sys=1.49%, ctx=15, majf=0, minf=11 00:33:54.370 IO depths : 1=0.1%, 2=0.1%, 4=2.1%, 8=81.2%, 16=16.6%, 32=0.0%, >=64=0.0% 00:33:54.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.370 complete : 0=0.0%, 4=88.9%, 8=9.4%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.370 issued rwts: total=5756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.370 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.370 filename2: (groupid=0, jobs=1): err= 0: pid=2422820: Wed Nov 20 15:42:56 2024 00:33:54.370 read: IOPS=567, BW=2270KiB/s (2325kB/s)(22.2MiB/10008msec) 00:33:54.370 slat (nsec): min=7572, max=45676, avg=20813.12, 
stdev=6627.71 00:33:54.370 clat (usec): min=16251, max=48186, avg=28017.21, stdev=1339.93 00:33:54.370 lat (usec): min=16265, max=48199, avg=28038.02, stdev=1339.25 00:33:54.370 clat percentiles (usec): 00:33:54.370 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:54.370 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:54.370 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:33:54.370 | 99.00th=[28967], 99.50th=[29230], 99.90th=[47973], 99.95th=[47973], 00:33:54.370 | 99.99th=[47973] 00:33:54.370 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2265.60, stdev=60.18, samples=20 00:33:54.370 iops : min= 544, max= 576, avg=566.40, stdev=15.05, samples=20 00:33:54.370 lat (msec) : 20=0.28%, 50=99.72% 00:33:54.370 cpu : usr=98.48%, sys=1.18%, ctx=13, majf=0, minf=9 00:33:54.370 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:54.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.370 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.370 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.370 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.370 filename2: (groupid=0, jobs=1): err= 0: pid=2422821: Wed Nov 20 15:42:56 2024 00:33:54.370 read: IOPS=568, BW=2276KiB/s (2331kB/s)(22.2MiB/10011msec) 00:33:54.370 slat (nsec): min=7457, max=54388, avg=22772.60, stdev=6772.03 00:33:54.370 clat (usec): min=15667, max=30136, avg=27925.83, stdev=832.81 00:33:54.370 lat (usec): min=15682, max=30163, avg=27948.61, stdev=833.24 00:33:54.370 clat percentiles (usec): 00:33:54.370 | 1.00th=[26870], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:54.370 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:54.370 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:33:54.370 | 99.00th=[28705], 99.50th=[28967], 99.90th=[30016], 
99.95th=[30016], 00:33:54.370 | 99.99th=[30016] 00:33:54.370 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2272.00, stdev=56.87, samples=20 00:33:54.370 iops : min= 544, max= 576, avg=568.00, stdev=14.22, samples=20 00:33:54.370 lat (msec) : 20=0.56%, 50=99.44% 00:33:54.370 cpu : usr=98.45%, sys=1.21%, ctx=15, majf=0, minf=9 00:33:54.370 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:54.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.370 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.370 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.370 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.370 filename2: (groupid=0, jobs=1): err= 0: pid=2422822: Wed Nov 20 15:42:56 2024 00:33:54.370 read: IOPS=568, BW=2275KiB/s (2330kB/s)(22.2MiB/10006msec) 00:33:54.370 slat (nsec): min=5918, max=47189, avg=21224.72, stdev=7463.46 00:33:54.370 clat (usec): min=8854, max=45684, avg=27930.40, stdev=1885.19 00:33:54.370 lat (usec): min=8861, max=45700, avg=27951.63, stdev=1885.40 00:33:54.370 clat percentiles (usec): 00:33:54.370 | 1.00th=[22414], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:54.370 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:54.370 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:33:54.370 | 99.00th=[30016], 99.50th=[33817], 99.90th=[45876], 99.95th=[45876], 00:33:54.370 | 99.99th=[45876] 00:33:54.370 bw ( KiB/s): min= 2176, max= 2544, per=4.16%, avg=2276.00, stdev=86.22, samples=20 00:33:54.370 iops : min= 544, max= 636, avg=569.00, stdev=21.56, samples=20 00:33:54.370 lat (msec) : 10=0.28%, 20=0.67%, 50=99.05% 00:33:54.370 cpu : usr=98.45%, sys=1.21%, ctx=10, majf=0, minf=9 00:33:54.370 IO depths : 1=5.4%, 2=11.1%, 4=22.6%, 8=53.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:33:54.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.370 
complete : 0=0.0%, 4=93.6%, 8=1.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.370 issued rwts: total=5692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.370 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.370 filename2: (groupid=0, jobs=1): err= 0: pid=2422823: Wed Nov 20 15:42:56 2024 00:33:54.370 read: IOPS=567, BW=2271KiB/s (2326kB/s)(22.2MiB/10004msec) 00:33:54.370 slat (nsec): min=7768, max=78830, avg=55129.32, stdev=5750.36 00:33:54.370 clat (usec): min=8812, max=55360, avg=27691.41, stdev=1869.66 00:33:54.370 lat (usec): min=8821, max=55406, avg=27746.54, stdev=1870.52 00:33:54.370 clat percentiles (usec): 00:33:54.370 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:33:54.370 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:33:54.370 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:54.370 | 99.00th=[28705], 99.50th=[28705], 99.90th=[55313], 99.95th=[55313], 00:33:54.370 | 99.99th=[55313] 00:33:54.370 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2263.79, stdev=73.91, samples=19 00:33:54.370 iops : min= 513, max= 576, avg=565.95, stdev=18.48, samples=19 00:33:54.370 lat (msec) : 10=0.28%, 20=0.28%, 50=99.15%, 100=0.28% 00:33:54.370 cpu : usr=98.57%, sys=1.05%, ctx=13, majf=0, minf=9 00:33:54.370 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:54.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.370 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.370 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.370 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:54.370 00:33:54.370 Run status group 0 (all jobs): 00:33:54.370 READ: bw=53.4MiB/s (56.0MB/s), 2270KiB/s-2334KiB/s (2324kB/s-2390kB/s), io=535MiB (561MB), run=10002-10017msec 00:33:54.370 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 
00:33:54.370 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:54.370 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:54.370 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:54.370 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:54.370 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:54.370 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.370 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.370 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.370 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:54.370 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.370 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.370 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- 
# rpc_cmd bdev_null_delete bdev_null1 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 
00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.371 bdev_null0 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t 
tcp -a 10.0.0.2 -s 4420 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.371 [2024-11-20 15:42:56.905544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.371 bdev_null1 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:54.371 { 00:33:54.371 "params": { 00:33:54.371 "name": "Nvme$subsystem", 00:33:54.371 "trtype": "$TEST_TRANSPORT", 00:33:54.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:54.371 "adrfam": "ipv4", 00:33:54.371 "trsvcid": 
"$NVMF_PORT", 00:33:54.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.371 "hdgst": ${hdgst:-false}, 00:33:54.371 "ddgst": ${ddgst:-false} 00:33:54.371 }, 00:33:54.371 "method": "bdev_nvme_attach_controller" 00:33:54.371 } 00:33:54.371 EOF 00:33:54.371 )") 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:54.371 15:42:56 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:54.371 { 00:33:54.371 "params": { 00:33:54.371 "name": "Nvme$subsystem", 00:33:54.371 "trtype": "$TEST_TRANSPORT", 00:33:54.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:54.371 "adrfam": "ipv4", 00:33:54.371 "trsvcid": "$NVMF_PORT", 00:33:54.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.371 "hdgst": ${hdgst:-false}, 00:33:54.371 "ddgst": ${ddgst:-false} 00:33:54.371 }, 00:33:54.371 "method": "bdev_nvme_attach_controller" 00:33:54.371 } 00:33:54.371 EOF 00:33:54.371 )") 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:54.371 15:42:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:54.372 "params": { 00:33:54.372 "name": "Nvme0", 00:33:54.372 "trtype": "tcp", 00:33:54.372 "traddr": "10.0.0.2", 00:33:54.372 "adrfam": "ipv4", 00:33:54.372 "trsvcid": "4420", 00:33:54.372 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:54.372 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:54.372 "hdgst": false, 00:33:54.372 "ddgst": false 00:33:54.372 }, 00:33:54.372 "method": "bdev_nvme_attach_controller" 00:33:54.372 },{ 00:33:54.372 "params": { 00:33:54.372 "name": "Nvme1", 00:33:54.372 "trtype": "tcp", 00:33:54.372 "traddr": "10.0.0.2", 00:33:54.372 "adrfam": "ipv4", 00:33:54.372 "trsvcid": "4420", 00:33:54.372 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:54.372 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:54.372 "hdgst": false, 00:33:54.372 "ddgst": false 00:33:54.372 }, 00:33:54.372 "method": "bdev_nvme_attach_controller" 00:33:54.372 }' 00:33:54.372 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:54.372 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:54.372 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:54.372 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:54.372 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:54.372 15:42:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:54.372 15:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:54.372 15:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:54.372 15:42:57 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:54.372 15:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:54.372 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:54.372 ... 00:33:54.372 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:54.372 ... 00:33:54.372 fio-3.35 00:33:54.372 Starting 4 threads 00:33:59.639 00:33:59.639 filename0: (groupid=0, jobs=1): err= 0: pid=2424767: Wed Nov 20 15:43:03 2024 00:33:59.639 read: IOPS=2877, BW=22.5MiB/s (23.6MB/s)(112MiB/5002msec) 00:33:59.639 slat (nsec): min=6125, max=68528, avg=11503.72, stdev=6324.95 00:33:59.639 clat (usec): min=860, max=5578, avg=2743.87, stdev=416.33 00:33:59.639 lat (usec): min=871, max=5598, avg=2755.37, stdev=416.85 00:33:59.639 clat percentiles (usec): 00:33:59.639 | 1.00th=[ 1696], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2409], 00:33:59.639 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2737], 60.00th=[ 2868], 00:33:59.639 | 70.00th=[ 2966], 80.00th=[ 3032], 90.00th=[ 3195], 95.00th=[ 3392], 00:33:59.639 | 99.00th=[ 3916], 99.50th=[ 4359], 99.90th=[ 4817], 99.95th=[ 5080], 00:33:59.639 | 99.99th=[ 5538] 00:33:59.639 bw ( KiB/s): min=21808, max=24080, per=27.34%, avg=23027.10, stdev=827.59, samples=10 00:33:59.639 iops : min= 2726, max= 3010, avg=2878.30, stdev=103.38, samples=10 00:33:59.639 lat (usec) : 1000=0.02% 00:33:59.639 lat (msec) : 2=2.84%, 4=96.30%, 10=0.84% 00:33:59.639 cpu : usr=97.26%, sys=2.40%, ctx=7, majf=0, minf=9 00:33:59.639 IO depths : 1=0.4%, 2=11.4%, 4=59.1%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:59.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.639 complete : 
0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.639 issued rwts: total=14395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.639 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:59.639 filename0: (groupid=0, jobs=1): err= 0: pid=2424768: Wed Nov 20 15:43:03 2024 00:33:59.639 read: IOPS=2481, BW=19.4MiB/s (20.3MB/s)(97.0MiB/5001msec) 00:33:59.639 slat (nsec): min=6122, max=68598, avg=11071.94, stdev=6752.75 00:33:59.639 clat (usec): min=992, max=5940, avg=3190.51, stdev=507.79 00:33:59.639 lat (usec): min=1003, max=5954, avg=3201.59, stdev=507.27 00:33:59.639 clat percentiles (usec): 00:33:59.639 | 1.00th=[ 2180], 5.00th=[ 2507], 10.00th=[ 2704], 20.00th=[ 2900], 00:33:59.639 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3097], 60.00th=[ 3163], 00:33:59.639 | 70.00th=[ 3294], 80.00th=[ 3523], 90.00th=[ 3785], 95.00th=[ 4178], 00:33:59.639 | 99.00th=[ 4883], 99.50th=[ 5080], 99.90th=[ 5669], 99.95th=[ 5735], 00:33:59.639 | 99.99th=[ 5932] 00:33:59.639 bw ( KiB/s): min=18981, max=21040, per=23.58%, avg=19865.44, stdev=665.43, samples=9 00:33:59.639 iops : min= 2372, max= 2630, avg=2483.11, stdev=83.28, samples=9 00:33:59.639 lat (usec) : 1000=0.01% 00:33:59.639 lat (msec) : 2=0.60%, 4=92.64%, 10=6.76% 00:33:59.639 cpu : usr=97.00%, sys=2.66%, ctx=12, majf=0, minf=9 00:33:59.639 IO depths : 1=0.2%, 2=2.9%, 4=69.7%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:59.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.639 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.639 issued rwts: total=12412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.639 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:59.639 filename1: (groupid=0, jobs=1): err= 0: pid=2424769: Wed Nov 20 15:43:03 2024 00:33:59.639 read: IOPS=2640, BW=20.6MiB/s (21.6MB/s)(103MiB/5001msec) 00:33:59.639 slat (nsec): min=6151, max=62959, avg=15285.66, stdev=10104.78 00:33:59.639 clat (usec): min=659, 
max=43712, avg=2983.80, stdev=1099.33 00:33:59.639 lat (usec): min=679, max=43740, avg=2999.09, stdev=1099.55 00:33:59.639 clat percentiles (usec): 00:33:59.639 | 1.00th=[ 2024], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2606], 00:33:59.639 | 30.00th=[ 2769], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 2999], 00:33:59.639 | 70.00th=[ 3097], 80.00th=[ 3228], 90.00th=[ 3556], 95.00th=[ 3752], 00:33:59.639 | 99.00th=[ 4424], 99.50th=[ 4686], 99.90th=[ 5342], 99.95th=[43779], 00:33:59.639 | 99.99th=[43779] 00:33:59.639 bw ( KiB/s): min=20256, max=22208, per=25.15%, avg=21182.22, stdev=685.92, samples=9 00:33:59.639 iops : min= 2532, max= 2776, avg=2647.78, stdev=85.74, samples=9 00:33:59.639 lat (usec) : 750=0.01%, 1000=0.03% 00:33:59.639 lat (msec) : 2=0.77%, 4=96.52%, 10=2.61%, 50=0.06% 00:33:59.639 cpu : usr=96.16%, sys=3.20%, ctx=61, majf=0, minf=9 00:33:59.639 IO depths : 1=0.2%, 2=7.9%, 4=62.1%, 8=29.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:59.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.639 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.639 issued rwts: total=13203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.639 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:59.639 filename1: (groupid=0, jobs=1): err= 0: pid=2424770: Wed Nov 20 15:43:03 2024 00:33:59.639 read: IOPS=2531, BW=19.8MiB/s (20.7MB/s)(98.9MiB/5001msec) 00:33:59.639 slat (nsec): min=6100, max=68532, avg=11359.71, stdev=6909.47 00:33:59.639 clat (usec): min=835, max=5690, avg=3126.59, stdev=484.67 00:33:59.639 lat (usec): min=842, max=5699, avg=3137.94, stdev=484.19 00:33:59.639 clat percentiles (usec): 00:33:59.639 | 1.00th=[ 1991], 5.00th=[ 2376], 10.00th=[ 2638], 20.00th=[ 2835], 00:33:59.639 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3130], 00:33:59.639 | 70.00th=[ 3228], 80.00th=[ 3425], 90.00th=[ 3752], 95.00th=[ 3982], 00:33:59.639 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 5276], 
99.95th=[ 5473], 00:33:59.639 | 99.99th=[ 5669] 00:33:59.639 bw ( KiB/s): min=19440, max=21328, per=24.12%, avg=20318.22, stdev=659.18, samples=9 00:33:59.639 iops : min= 2430, max= 2666, avg=2539.78, stdev=82.40, samples=9 00:33:59.639 lat (usec) : 1000=0.02% 00:33:59.639 lat (msec) : 2=1.00%, 4=94.09%, 10=4.88% 00:33:59.639 cpu : usr=97.44%, sys=2.24%, ctx=6, majf=0, minf=9 00:33:59.639 IO depths : 1=0.3%, 2=3.9%, 4=68.8%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:59.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.639 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.639 issued rwts: total=12658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.639 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:59.639 00:33:59.639 Run status group 0 (all jobs): 00:33:59.639 READ: bw=82.3MiB/s (86.3MB/s), 19.4MiB/s-22.5MiB/s (20.3MB/s-23.6MB/s), io=411MiB (431MB), run=5001-5002msec 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:59.639 15:43:03 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.639 00:33:59.639 real 0m24.306s 00:33:59.639 user 4m52.405s 00:33:59.639 sys 0m5.449s 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:59.639 15:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:59.639 ************************************ 00:33:59.639 END TEST fio_dif_rand_params 00:33:59.639 ************************************ 00:33:59.639 15:43:03 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:59.639 15:43:03 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:59.639 15:43:03 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:59.639 15:43:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:59.639 ************************************ 00:33:59.639 START TEST fio_dif_digest 00:33:59.639 ************************************ 00:33:59.639 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:33:59.639 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:59.639 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:59.639 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:59.639 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:59.639 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:59.639 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:59.639 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:59.639 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:59.639 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:59.639 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:59.639 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:59.639 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:59.639 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:59.639 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:59.639 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:59.639 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:59.639 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:59.639 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:59.640 bdev_null0 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:59.640 [2024-11-20 15:43:03.413494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:59.640 { 00:33:59.640 "params": { 00:33:59.640 "name": "Nvme$subsystem", 00:33:59.640 "trtype": "$TEST_TRANSPORT", 00:33:59.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:59.640 "adrfam": "ipv4", 00:33:59.640 "trsvcid": "$NVMF_PORT", 00:33:59.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:59.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:59.640 "hdgst": ${hdgst:-false}, 00:33:59.640 "ddgst": ${ddgst:-false} 00:33:59.640 }, 00:33:59.640 "method": "bdev_nvme_attach_controller" 00:33:59.640 } 00:33:59.640 EOF 00:33:59.640 )") 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:59.640 "params": { 00:33:59.640 "name": "Nvme0", 00:33:59.640 "trtype": "tcp", 00:33:59.640 "traddr": "10.0.0.2", 00:33:59.640 "adrfam": "ipv4", 00:33:59.640 "trsvcid": "4420", 00:33:59.640 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:59.640 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:59.640 "hdgst": true, 00:33:59.640 "ddgst": true 00:33:59.640 }, 00:33:59.640 "method": "bdev_nvme_attach_controller" 00:33:59.640 }' 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:59.640 15:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:59.898 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:59.898 ... 
00:33:59.898 fio-3.35 00:33:59.898 Starting 3 threads 00:34:12.103 00:34:12.103 filename0: (groupid=0, jobs=1): err= 0: pid=2425884: Wed Nov 20 15:43:14 2024 00:34:12.103 read: IOPS=303, BW=38.0MiB/s (39.8MB/s)(381MiB/10046msec) 00:34:12.103 slat (nsec): min=6542, max=31426, avg=11047.56, stdev=1795.60 00:34:12.103 clat (usec): min=7340, max=52148, avg=9851.11, stdev=1807.94 00:34:12.103 lat (usec): min=7352, max=52180, avg=9862.16, stdev=1808.02 00:34:12.103 clat percentiles (usec): 00:34:12.103 | 1.00th=[ 8160], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:34:12.103 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:34:12.103 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:34:12.103 | 99.00th=[11469], 99.50th=[11600], 99.90th=[50070], 99.95th=[52167], 00:34:12.103 | 99.99th=[52167] 00:34:12.103 bw ( KiB/s): min=36096, max=41216, per=36.06%, avg=39027.20, stdev=1167.17, samples=20 00:34:12.103 iops : min= 282, max= 322, avg=304.90, stdev= 9.12, samples=20 00:34:12.103 lat (msec) : 10=61.42%, 20=38.41%, 50=0.03%, 100=0.13% 00:34:12.103 cpu : usr=95.82%, sys=3.87%, ctx=18, majf=0, minf=27 00:34:12.103 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:12.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.103 issued rwts: total=3051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:12.103 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:12.103 filename0: (groupid=0, jobs=1): err= 0: pid=2425885: Wed Nov 20 15:43:14 2024 00:34:12.103 read: IOPS=276, BW=34.5MiB/s (36.2MB/s)(347MiB/10046msec) 00:34:12.103 slat (nsec): min=6548, max=30575, avg=12187.72, stdev=1717.02 00:34:12.103 clat (usec): min=6466, max=50787, avg=10834.85, stdev=1298.74 00:34:12.103 lat (usec): min=6482, max=50797, avg=10847.04, stdev=1298.68 00:34:12.103 clat percentiles (usec): 00:34:12.103 
| 1.00th=[ 8717], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:34:12.103 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:34:12.103 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:34:12.103 | 99.00th=[12780], 99.50th=[13042], 99.90th=[13829], 99.95th=[45876], 00:34:12.103 | 99.99th=[50594] 00:34:12.103 bw ( KiB/s): min=34304, max=36608, per=32.78%, avg=35481.60, stdev=819.71, samples=20 00:34:12.103 iops : min= 268, max= 286, avg=277.20, stdev= 6.40, samples=20 00:34:12.103 lat (msec) : 10=14.53%, 20=85.40%, 50=0.04%, 100=0.04% 00:34:12.103 cpu : usr=95.62%, sys=4.07%, ctx=15, majf=0, minf=20 00:34:12.103 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:12.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.103 issued rwts: total=2774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:12.103 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:12.103 filename0: (groupid=0, jobs=1): err= 0: pid=2425886: Wed Nov 20 15:43:14 2024 00:34:12.103 read: IOPS=265, BW=33.2MiB/s (34.8MB/s)(334MiB/10046msec) 00:34:12.103 slat (nsec): min=6540, max=34421, avg=11197.79, stdev=1665.17 00:34:12.103 clat (usec): min=6935, max=48947, avg=11257.82, stdev=1283.35 00:34:12.103 lat (usec): min=6944, max=48960, avg=11269.02, stdev=1283.35 00:34:12.103 clat percentiles (usec): 00:34:12.103 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10290], 20.00th=[10552], 00:34:12.103 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:34:12.103 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:34:12.103 | 99.00th=[13173], 99.50th=[13304], 99.90th=[14091], 99.95th=[46400], 00:34:12.103 | 99.99th=[49021] 00:34:12.103 bw ( KiB/s): min=33024, max=35072, per=31.55%, avg=34150.40, stdev=640.13, samples=20 00:34:12.103 iops : min= 258, max= 274, avg=266.80, 
stdev= 5.00, samples=20 00:34:12.103 lat (msec) : 10=4.64%, 20=95.28%, 50=0.07% 00:34:12.103 cpu : usr=95.92%, sys=3.77%, ctx=21, majf=0, minf=28 00:34:12.103 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:12.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.103 issued rwts: total=2670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:12.103 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:12.103 00:34:12.103 Run status group 0 (all jobs): 00:34:12.103 READ: bw=106MiB/s (111MB/s), 33.2MiB/s-38.0MiB/s (34.8MB/s-39.8MB/s), io=1062MiB (1113MB), run=10046-10046msec 00:34:12.103 15:43:14 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:12.103 15:43:14 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:12.103 15:43:14 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:12.103 15:43:14 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:12.103 15:43:14 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:12.103 15:43:14 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:12.103 15:43:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.103 15:43:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:12.103 15:43:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.103 15:43:14 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:12.103 15:43:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.103 15:43:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:12.103 15:43:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.103 00:34:12.103 
real 0m11.103s 00:34:12.103 user 0m35.348s 00:34:12.103 sys 0m1.501s 00:34:12.103 15:43:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:12.103 15:43:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:12.103 ************************************ 00:34:12.103 END TEST fio_dif_digest 00:34:12.103 ************************************ 00:34:12.103 15:43:14 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:12.103 15:43:14 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:12.103 15:43:14 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:12.103 15:43:14 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:12.103 15:43:14 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:12.103 15:43:14 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:12.103 15:43:14 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:12.103 15:43:14 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:12.104 rmmod nvme_tcp 00:34:12.104 rmmod nvme_fabrics 00:34:12.104 rmmod nvme_keyring 00:34:12.104 15:43:14 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:12.104 15:43:14 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:12.104 15:43:14 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:12.104 15:43:14 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2417453 ']' 00:34:12.104 15:43:14 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2417453 00:34:12.104 15:43:14 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2417453 ']' 00:34:12.104 15:43:14 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2417453 00:34:12.104 15:43:14 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:12.104 15:43:14 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:12.104 15:43:14 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2417453 00:34:12.104 15:43:14 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:12.104 
15:43:14 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:12.104 15:43:14 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2417453' 00:34:12.104 killing process with pid 2417453 00:34:12.104 15:43:14 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2417453 00:34:12.104 15:43:14 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2417453 00:34:12.104 15:43:14 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:12.104 15:43:14 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:14.011 Waiting for block devices as requested 00:34:14.011 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:14.011 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:14.011 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:14.011 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:14.011 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:14.270 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:14.270 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:14.270 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:14.532 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:14.532 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:14.532 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:14.532 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:14.822 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:14.822 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:14.822 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:14.822 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:15.113 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:15.113 15:43:18 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:15.113 15:43:18 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:15.113 15:43:18 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:15.113 15:43:18 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:15.113 15:43:18 nvmf_dif -- nvmf/common.sh@791 
-- # grep -v SPDK_NVMF 00:34:15.113 15:43:18 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:15.113 15:43:18 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:15.113 15:43:18 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:15.113 15:43:18 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.113 15:43:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:15.113 15:43:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.056 15:43:20 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:17.056 00:34:17.056 real 1m14.006s 00:34:17.056 user 7m9.818s 00:34:17.056 sys 0m20.806s 00:34:17.056 15:43:20 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:17.056 15:43:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:17.056 ************************************ 00:34:17.056 END TEST nvmf_dif 00:34:17.056 ************************************ 00:34:17.315 15:43:20 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:17.315 15:43:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:17.315 15:43:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:17.315 15:43:20 -- common/autotest_common.sh@10 -- # set +x 00:34:17.315 ************************************ 00:34:17.315 START TEST nvmf_abort_qd_sizes 00:34:17.315 ************************************ 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:17.315 * Looking for test storage... 
00:34:17.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:17.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.315 --rc genhtml_branch_coverage=1 00:34:17.315 --rc genhtml_function_coverage=1 00:34:17.315 --rc genhtml_legend=1 00:34:17.315 --rc geninfo_all_blocks=1 00:34:17.315 --rc geninfo_unexecuted_blocks=1 00:34:17.315 00:34:17.315 ' 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:17.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.315 --rc genhtml_branch_coverage=1 00:34:17.315 --rc genhtml_function_coverage=1 00:34:17.315 --rc genhtml_legend=1 00:34:17.315 --rc 
geninfo_all_blocks=1 00:34:17.315 --rc geninfo_unexecuted_blocks=1 00:34:17.315 00:34:17.315 ' 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:17.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.315 --rc genhtml_branch_coverage=1 00:34:17.315 --rc genhtml_function_coverage=1 00:34:17.315 --rc genhtml_legend=1 00:34:17.315 --rc geninfo_all_blocks=1 00:34:17.315 --rc geninfo_unexecuted_blocks=1 00:34:17.315 00:34:17.315 ' 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:17.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.315 --rc genhtml_branch_coverage=1 00:34:17.315 --rc genhtml_function_coverage=1 00:34:17.315 --rc genhtml_legend=1 00:34:17.315 --rc geninfo_all_blocks=1 00:34:17.315 --rc geninfo_unexecuted_blocks=1 00:34:17.315 00:34:17.315 ' 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:17.315 15:43:21 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:17.315 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:17.575 15:43:21 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:17.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:17.575 15:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:24.142 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:24.142 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:24.142 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:24.142 15:43:26 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:34:24.142 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:24.142 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:24.142 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:24.142 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:24.142 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:24.142 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:24.142 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:34:24.142 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:24.142 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:24.143 15:43:26 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:24.143 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:24.143 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:24.143 Found net devices under 0000:86:00.0: cvl_0_0 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:24.143 Found net devices under 0000:86:00.1: cvl_0_1 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:24.143 15:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:24.143 15:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:24.143 15:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:24.143 15:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:24.143 15:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:24.143 15:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:24.143 15:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:24.143 15:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:24.143 15:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:24.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:24.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:34:24.143 00:34:24.143 --- 10.0.0.2 ping statistics --- 00:34:24.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:24.143 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:34:24.143 15:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:24.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:24.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:34:24.143 00:34:24.143 --- 10.0.0.1 ping statistics --- 00:34:24.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:24.143 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:34:24.143 15:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:24.143 15:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:24.143 15:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:24.143 15:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:26.046 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:26.046 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:26.046 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:26.046 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:26.046 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:26.046 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:26.046 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:26.304 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:26.304 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:26.304 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:26.304 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:26.304 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:26.304 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:26.304 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:26.304 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:26.304 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:27.240 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:27.240 15:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:27.240 15:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:27.240 15:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:27.240 15:43:31 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:27.240 15:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:27.240 15:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:27.240 15:43:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:27.240 15:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:27.240 15:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:27.240 15:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:27.240 15:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2433850 00:34:27.240 15:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2433850 00:34:27.240 15:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:27.240 15:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2433850 ']' 00:34:27.240 15:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:27.240 15:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:27.240 15:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:27.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:27.240 15:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:27.240 15:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:27.240 [2024-11-20 15:43:31.116156] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:34:27.240 [2024-11-20 15:43:31.116207] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:27.498 [2024-11-20 15:43:31.199647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:27.498 [2024-11-20 15:43:31.241447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:27.498 [2024-11-20 15:43:31.241488] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:27.498 [2024-11-20 15:43:31.241495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:27.498 [2024-11-20 15:43:31.241501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:27.498 [2024-11-20 15:43:31.241506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
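The nvmf_tgt invocation above passes `-m 0xf`, and the startup notices report "Total cores available: 4" followed by four reactor threads. A minimal sketch of how a hex core mask maps to a core count — `count_cores` is a hypothetical helper for illustration, not part of the SPDK harness:

```shell
# Hypothetical helper: count set bits in an SPDK-style hex core mask.
# The log's "-m 0xf" mask has bits 0-3 set, i.e. four reactor cores.
count_cores() {
  local mask=$(( $1 )) n=0
  while [ "$mask" -gt 0 ]; do
    n=$(( n + (mask & 1) ))   # add the lowest bit
    mask=$(( mask >> 1 ))     # shift to the next core bit
  done
  echo "$n"
}

count_cores 0xf    # prints 4 (cores 0-3)
count_cores 0x5    # prints 2 (cores 0 and 2)
```

Each set bit pins one reactor to the corresponding core, which is why the log shows reactors starting on cores 0 through 3.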
00:34:27.498 [2024-11-20 15:43:31.242908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:27.498 [2024-11-20 15:43:31.243021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:27.498 [2024-11-20 15:43:31.243128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.498 [2024-11-20 15:43:31.243128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:27.498 15:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:27.498 15:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:27.498 15:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:27.498 15:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:27.499 15:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:27.757 ************************************ 00:34:27.757 START TEST spdk_target_abort 00:34:27.757 ************************************ 00:34:27.757 15:43:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:27.757 15:43:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:27.757 15:43:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:27.757 15:43:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.757 15:43:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:31.039 spdk_targetn1 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:31.039 [2024-11-20 15:43:34.266168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:31.039 [2024-11-20 15:43:34.300776] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:31.039 15:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:33.568 Initializing NVMe Controllers 00:34:33.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:33.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:33.568 Initialization complete. Launching workers. 
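The per-run counters the abort tool prints below are internally consistent: `success + unsuccessful` equals the number of aborts submitted, and `abort submitted + failed to submit` equals the I/O completed. A quick arithmetic check against the first (qd=4) run's numbers from this log:

```shell
# Sanity-check the abort counters reported for the qd=4 run in this log.
io_completed=15373
submitted=1382 failed_to_submit=13991
success=726 unsuccessful=656

[ $(( success + unsuccessful )) -eq "$submitted" ] && echo "abort split OK"
[ $(( submitted + failed_to_submit )) -eq "$io_completed" ] && echo "I/O total OK"
```

The same relation holds for the qd=24 run (304 + 960 = 1264; 1264 + 7323 = 8587) and the qd=64 run (569 + 2177 = 2746; 2746 + 35023 = 37769).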
00:34:33.568 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15373, failed: 0 00:34:33.568 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1382, failed to submit 13991 00:34:33.568 success 726, unsuccessful 656, failed 0 00:34:33.568 15:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:33.568 15:43:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:36.855 Initializing NVMe Controllers 00:34:36.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:36.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:36.855 Initialization complete. Launching workers. 00:34:36.855 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8587, failed: 0 00:34:36.855 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1264, failed to submit 7323 00:34:36.855 success 304, unsuccessful 960, failed 0 00:34:36.855 15:43:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:36.855 15:43:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:40.130 Initializing NVMe Controllers 00:34:40.130 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:40.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:40.130 Initialization complete. Launching workers. 
00:34:40.130 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37769, failed: 0 00:34:40.130 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2746, failed to submit 35023 00:34:40.130 success 569, unsuccessful 2177, failed 0 00:34:40.130 15:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:40.130 15:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.130 15:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:40.130 15:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.130 15:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:40.130 15:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.130 15:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:41.502 15:43:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.502 15:43:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2433850 00:34:41.502 15:43:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2433850 ']' 00:34:41.502 15:43:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2433850 00:34:41.502 15:43:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:41.502 15:43:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:41.502 15:43:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2433850 00:34:41.502 15:43:45 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:41.502 15:43:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:41.502 15:43:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2433850' 00:34:41.502 killing process with pid 2433850 00:34:41.502 15:43:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2433850 00:34:41.502 15:43:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2433850 00:34:41.760 00:34:41.760 real 0m14.064s 00:34:41.760 user 0m53.623s 00:34:41.760 sys 0m2.608s 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:41.760 ************************************ 00:34:41.760 END TEST spdk_target_abort 00:34:41.760 ************************************ 00:34:41.760 15:43:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:41.760 15:43:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:41.760 15:43:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:41.760 15:43:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:41.760 ************************************ 00:34:41.760 START TEST kernel_target_abort 00:34:41.760 ************************************ 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:41.760 15:43:45 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:41.760 15:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:45.050 Waiting for block devices as requested 00:34:45.050 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:45.050 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:45.050 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:45.050 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:45.050 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:45.050 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:45.050 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:45.050 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:45.309 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:45.309 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:45.309 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:45.568 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:45.568 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:45.568 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:45.568 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:45.826 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:45.826 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:45.826 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:45.826 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:45.826 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:45.826 15:43:49 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:45.826 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:45.826 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:45.826 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:45.826 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:45.826 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:46.085 No valid GPT data, bailing 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:46.085 00:34:46.085 Discovery Log Number of Records 2, Generation counter 2 00:34:46.085 =====Discovery Log Entry 0====== 00:34:46.085 trtype: tcp 00:34:46.085 adrfam: ipv4 00:34:46.085 subtype: current discovery subsystem 00:34:46.085 treq: not specified, sq flow control disable supported 00:34:46.085 portid: 1 00:34:46.085 trsvcid: 4420 00:34:46.085 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:46.085 traddr: 10.0.0.1 00:34:46.085 eflags: none 00:34:46.085 sectype: none 00:34:46.085 =====Discovery Log Entry 1====== 00:34:46.085 trtype: tcp 00:34:46.085 adrfam: ipv4 00:34:46.085 subtype: nvme subsystem 00:34:46.085 treq: not specified, sq flow control disable supported 00:34:46.085 portid: 1 00:34:46.085 trsvcid: 4420 00:34:46.085 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:46.085 traddr: 10.0.0.1 00:34:46.085 eflags: none 00:34:46.085 sectype: none 00:34:46.085 15:43:49 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:46.085 15:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:49.370 Initializing NVMe Controllers 00:34:49.370 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:49.370 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:49.370 Initialization complete. Launching workers. 
00:34:49.370 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92853, failed: 0 00:34:49.370 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 92853, failed to submit 0 00:34:49.370 success 0, unsuccessful 92853, failed 0 00:34:49.370 15:43:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:49.370 15:43:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:52.656 Initializing NVMe Controllers 00:34:52.657 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:52.657 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:52.657 Initialization complete. Launching workers. 00:34:52.657 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146560, failed: 0 00:34:52.657 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36662, failed to submit 109898 00:34:52.657 success 0, unsuccessful 36662, failed 0 00:34:52.657 15:43:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:52.657 15:43:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:55.943 Initializing NVMe Controllers 00:34:55.943 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:55.943 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:55.943 Initialization complete. Launching workers. 
00:34:55.943 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 138743, failed: 0 00:34:55.943 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34742, failed to submit 104001 00:34:55.943 success 0, unsuccessful 34742, failed 0 00:34:55.943 15:43:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:55.943 15:43:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:55.943 15:43:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:34:55.943 15:43:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:55.943 15:43:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:55.943 15:43:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:55.943 15:43:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:55.943 15:43:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:55.943 15:43:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:55.943 15:43:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:58.477 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:58.477 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:58.477 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:58.477 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:58.477 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:58.477 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:58.477 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:58.477 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:58.477 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:58.477 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:58.477 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:58.477 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:58.477 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:58.477 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:58.477 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:58.477 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:59.414 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:59.414 00:34:59.414 real 0m17.559s 00:34:59.414 user 0m9.113s 00:34:59.414 sys 0m5.110s 00:34:59.414 15:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:59.414 15:44:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:59.414 ************************************ 00:34:59.414 END TEST kernel_target_abort 00:34:59.414 ************************************ 00:34:59.414 15:44:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:59.414 15:44:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:59.414 15:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:59.414 15:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:34:59.414 15:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:59.414 15:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:59.414 15:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:59.414 15:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:59.414 rmmod nvme_tcp 00:34:59.414 rmmod nvme_fabrics 00:34:59.414 rmmod nvme_keyring 00:34:59.414 15:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:34:59.414 15:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:59.414 15:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:59.414 15:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2433850 ']' 00:34:59.414 15:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2433850 00:34:59.414 15:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2433850 ']' 00:34:59.414 15:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2433850 00:34:59.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2433850) - No such process 00:34:59.415 15:44:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2433850 is not found' 00:34:59.415 Process with pid 2433850 is not found 00:34:59.415 15:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:59.415 15:44:03 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:02.704 Waiting for block devices as requested 00:35:02.704 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:02.704 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:02.704 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:02.704 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:02.704 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:02.704 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:02.704 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:02.704 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:02.964 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:02.964 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:02.964 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:02.964 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:03.224 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:03.224 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:03.224 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:03.483 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:03.483 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:03.483 15:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:03.483 15:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:03.483 15:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:03.483 15:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:03.483 15:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:03.483 15:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:03.483 15:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:03.483 15:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:03.483 15:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:03.483 15:44:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:03.483 15:44:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.018 15:44:09 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:06.018 00:35:06.018 real 0m48.370s 00:35:06.018 user 1m7.093s 00:35:06.018 sys 0m16.515s 00:35:06.018 15:44:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:06.018 15:44:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:06.018 ************************************ 00:35:06.018 END TEST nvmf_abort_qd_sizes 00:35:06.018 ************************************ 00:35:06.018 15:44:09 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:06.018 15:44:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:06.018 15:44:09 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:35:06.018 15:44:09 -- common/autotest_common.sh@10 -- # set +x 00:35:06.018 ************************************ 00:35:06.018 START TEST keyring_file 00:35:06.018 ************************************ 00:35:06.018 15:44:09 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:06.018 * Looking for test storage... 00:35:06.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:06.018 15:44:09 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:06.018 15:44:09 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:35:06.018 15:44:09 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:06.018 15:44:09 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:06.018 15:44:09 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:06.018 15:44:09 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:06.018 15:44:09 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:06.018 15:44:09 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:06.018 15:44:09 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:06.018 15:44:09 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:06.018 15:44:09 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:06.018 15:44:09 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:06.018 15:44:09 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:06.018 15:44:09 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:06.018 15:44:09 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:06.018 15:44:09 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:06.018 15:44:09 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:06.018 15:44:09 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:06.019 15:44:09 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:06.019 15:44:09 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:06.019 15:44:09 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:06.019 15:44:09 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:06.019 15:44:09 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:06.019 15:44:09 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:06.019 15:44:09 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:06.019 15:44:09 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:06.019 15:44:09 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:06.019 15:44:09 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:06.019 15:44:09 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:06.019 15:44:09 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:06.019 15:44:09 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:06.019 15:44:09 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:06.019 15:44:09 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:06.019 15:44:09 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:06.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.019 --rc genhtml_branch_coverage=1 00:35:06.019 --rc genhtml_function_coverage=1 00:35:06.019 --rc genhtml_legend=1 00:35:06.019 --rc geninfo_all_blocks=1 00:35:06.019 --rc geninfo_unexecuted_blocks=1 00:35:06.019 00:35:06.019 ' 00:35:06.019 15:44:09 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:06.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.019 --rc genhtml_branch_coverage=1 00:35:06.019 --rc genhtml_function_coverage=1 00:35:06.019 --rc genhtml_legend=1 00:35:06.019 --rc geninfo_all_blocks=1 00:35:06.019 --rc 
geninfo_unexecuted_blocks=1 00:35:06.019 00:35:06.019 ' 00:35:06.019 15:44:09 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:06.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.019 --rc genhtml_branch_coverage=1 00:35:06.019 --rc genhtml_function_coverage=1 00:35:06.019 --rc genhtml_legend=1 00:35:06.019 --rc geninfo_all_blocks=1 00:35:06.019 --rc geninfo_unexecuted_blocks=1 00:35:06.019 00:35:06.019 ' 00:35:06.019 15:44:09 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:06.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.019 --rc genhtml_branch_coverage=1 00:35:06.019 --rc genhtml_function_coverage=1 00:35:06.019 --rc genhtml_legend=1 00:35:06.019 --rc geninfo_all_blocks=1 00:35:06.019 --rc geninfo_unexecuted_blocks=1 00:35:06.019 00:35:06.019 ' 00:35:06.019 15:44:09 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:06.019 15:44:09 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:06.019 15:44:09 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:06.019 15:44:09 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:06.019 15:44:09 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.019 15:44:09 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.019 15:44:09 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.019 15:44:09 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.019 15:44:09 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.019 15:44:09 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.019 15:44:09 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:06.019 15:44:09 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:06.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:06.019 15:44:09 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:06.019 15:44:09 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:06.019 15:44:09 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:06.019 15:44:09 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:06.019 15:44:09 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:06.019 15:44:09 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:06.019 15:44:09 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:06.019 15:44:09 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:06.019 15:44:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:06.019 15:44:09 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:06.019 15:44:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:06.019 15:44:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:06.019 15:44:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:06.019 15:44:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wFAdr0d7gs 00:35:06.020 15:44:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:06.020 15:44:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:06.020 15:44:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:06.020 15:44:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:06.020 15:44:09 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:35:06.020 15:44:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:06.020 15:44:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:06.020 15:44:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wFAdr0d7gs 00:35:06.020 15:44:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wFAdr0d7gs 00:35:06.020 15:44:09 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.wFAdr0d7gs 00:35:06.020 15:44:09 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:06.020 15:44:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:06.020 15:44:09 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:06.020 15:44:09 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:06.020 15:44:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:06.020 15:44:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:06.020 15:44:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FoUJ3fGdaS 00:35:06.020 15:44:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:06.020 15:44:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:06.020 15:44:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:06.020 15:44:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:06.020 15:44:09 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:06.020 15:44:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:06.020 15:44:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:06.020 15:44:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FoUJ3fGdaS 00:35:06.020 15:44:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FoUJ3fGdaS 00:35:06.020 15:44:09 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.FoUJ3fGdaS 
00:35:06.020 15:44:09 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:06.020 15:44:09 keyring_file -- keyring/file.sh@30 -- # tgtpid=2442625 00:35:06.020 15:44:09 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2442625 00:35:06.020 15:44:09 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2442625 ']' 00:35:06.020 15:44:09 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:06.020 15:44:09 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:06.020 15:44:09 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:06.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:06.020 15:44:09 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:06.020 15:44:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:06.020 [2024-11-20 15:44:09.830771] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:35:06.020 [2024-11-20 15:44:09.830821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2442625 ] 00:35:06.020 [2024-11-20 15:44:09.906665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.278 [2024-11-20 15:44:09.950069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:06.278 15:44:10 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:06.278 15:44:10 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:06.278 15:44:10 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:06.278 15:44:10 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.278 15:44:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:06.278 [2024-11-20 15:44:10.169494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:06.538 null0 00:35:06.538 [2024-11-20 15:44:10.201546] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:06.538 [2024-11-20 15:44:10.201916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:06.538 15:44:10 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.538 15:44:10 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:06.538 15:44:10 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:06.538 15:44:10 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:06.538 15:44:10 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:06.538 15:44:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:35:06.538 15:44:10 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:06.538 15:44:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:06.538 15:44:10 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:06.538 15:44:10 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.538 15:44:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:06.538 [2024-11-20 15:44:10.229611] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:06.538 request: 00:35:06.538 { 00:35:06.538 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:06.538 "secure_channel": false, 00:35:06.538 "listen_address": { 00:35:06.538 "trtype": "tcp", 00:35:06.538 "traddr": "127.0.0.1", 00:35:06.538 "trsvcid": "4420" 00:35:06.538 }, 00:35:06.538 "method": "nvmf_subsystem_add_listener", 00:35:06.538 "req_id": 1 00:35:06.538 } 00:35:06.538 Got JSON-RPC error response 00:35:06.538 response: 00:35:06.538 { 00:35:06.538 "code": -32602, 00:35:06.538 "message": "Invalid parameters" 00:35:06.538 } 00:35:06.538 15:44:10 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:06.538 15:44:10 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:06.538 15:44:10 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:06.538 15:44:10 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:06.538 15:44:10 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:06.538 15:44:10 keyring_file -- keyring/file.sh@47 -- # bperfpid=2442631 00:35:06.538 15:44:10 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2442631 /var/tmp/bperf.sock 00:35:06.538 15:44:10 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:06.538 15:44:10 
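The failing `nvmf_subsystem_add_listener` call above is an ordinary JSON-RPC 2.0 exchange over the target's Unix-domain socket. A sketch of the request body `rpc.py` would send, with the parameter values taken from the request dump in the log (the wire framing and helper name are illustrative assumptions):

```python
import json

def build_rpc_request(req_id: int, method: str, params: dict) -> str:
    # JSON-RPC 2.0 request as sent over /var/tmp/spdk.sock
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params})

request = build_rpc_request(1, "nvmf_subsystem_add_listener", {
    "nqn": "nqn.2016-06.io.spdk:cnode0",
    "secure_channel": False,
    "listen_address": {"trtype": "tcp", "traddr": "127.0.0.1",
                       "trsvcid": "4420"},
})
```

Because the target is already listening on 127.0.0.1:4420, it answers with code -32602 ("Invalid parameters"), which the `NOT` wrapper converts into the expected non-zero status (`es=1`).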
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2442631 ']' 00:35:06.538 15:44:10 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:06.538 15:44:10 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:06.538 15:44:10 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:06.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:06.538 15:44:10 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:06.538 15:44:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:06.538 [2024-11-20 15:44:10.281547] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 00:35:06.538 [2024-11-20 15:44:10.281592] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2442631 ] 00:35:06.538 [2024-11-20 15:44:10.355690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.538 [2024-11-20 15:44:10.398545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:06.797 15:44:10 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:06.797 15:44:10 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:06.797 15:44:10 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wFAdr0d7gs 00:35:06.797 15:44:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wFAdr0d7gs 00:35:06.797 15:44:10 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.FoUJ3fGdaS 00:35:06.797 15:44:10 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.FoUJ3fGdaS 00:35:07.056 15:44:10 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:07.056 15:44:10 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:07.056 15:44:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.056 15:44:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:07.056 15:44:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.315 15:44:11 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.wFAdr0d7gs == \/\t\m\p\/\t\m\p\.\w\F\A\d\r\0\d\7\g\s ]] 00:35:07.315 15:44:11 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:07.315 15:44:11 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:07.315 15:44:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.315 15:44:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:07.315 15:44:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.573 15:44:11 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.FoUJ3fGdaS == \/\t\m\p\/\t\m\p\.\F\o\U\J\3\f\G\d\a\S ]] 00:35:07.573 15:44:11 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:07.573 15:44:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:07.573 15:44:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:07.573 15:44:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.573 15:44:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:07.573 15:44:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
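The `get_key`/`get_refcnt` helpers traced here filter `keyring_get_keys` output with jq (`.[] | select(.name == "key0")`, then `.refcnt` or `.path`). The same selection expressed in Python, with the key-object shape inferred from the fields the trace reads (`name`, `path`, `refcnt`, `removed`):

```python
def get_key(keys, name):
    # jq equivalent: .[] | select(.name == $name)
    return next((k for k in keys if k["name"] == name), None)

def get_refcnt(keys, name):
    # jq equivalent: ... | .refcnt
    return get_key(keys, name)["refcnt"]

# Example payload mirroring the state after both keyring_file_add_key calls
keys = [
    {"name": "key0", "path": "/tmp/tmp.wFAdr0d7gs", "refcnt": 1, "removed": False},
    {"name": "key1", "path": "/tmp/tmp.FoUJ3fGdaS", "refcnt": 1, "removed": False},
]
```

The refcount comparisons that follow (`(( 1 == 1 ))`, later `(( 2 == 2 ))`) check exactly this field before and after a controller attaches with `--psk key0`.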
00:35:07.573 15:44:11 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:07.573 15:44:11 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:07.573 15:44:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:07.573 15:44:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:07.573 15:44:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:07.573 15:44:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.573 15:44:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.832 15:44:11 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:07.832 15:44:11 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:07.832 15:44:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:08.091 [2024-11-20 15:44:11.828722] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:08.091 nvme0n1 00:35:08.091 15:44:11 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:08.091 15:44:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:08.091 15:44:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:08.091 15:44:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:08.091 15:44:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:08.091 15:44:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:35:08.350 15:44:12 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:08.350 15:44:12 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:08.350 15:44:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:08.350 15:44:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:08.350 15:44:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:08.350 15:44:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:08.350 15:44:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:08.609 15:44:12 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:08.609 15:44:12 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:08.609 Running I/O for 1 seconds... 00:35:09.546 18665.00 IOPS, 72.91 MiB/s 00:35:09.546 Latency(us) 00:35:09.546 [2024-11-20T14:44:13.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:09.546 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:09.546 nvme0n1 : 1.00 18710.35 73.09 0.00 0.00 6827.88 2849.39 17894.18 00:35:09.546 [2024-11-20T14:44:13.454Z] =================================================================================================================== 00:35:09.546 [2024-11-20T14:44:13.454Z] Total : 18710.35 73.09 0.00 0.00 6827.88 2849.39 17894.18 00:35:09.546 { 00:35:09.546 "results": [ 00:35:09.546 { 00:35:09.546 "job": "nvme0n1", 00:35:09.546 "core_mask": "0x2", 00:35:09.546 "workload": "randrw", 00:35:09.546 "percentage": 50, 00:35:09.546 "status": "finished", 00:35:09.546 "queue_depth": 128, 00:35:09.546 "io_size": 4096, 00:35:09.546 "runtime": 1.004471, 00:35:09.546 "iops": 18710.34604284245, 00:35:09.546 "mibps": 73.08728922985333, 
00:35:09.546 "io_failed": 0, 00:35:09.546 "io_timeout": 0, 00:35:09.546 "avg_latency_us": 6827.881961217964, 00:35:09.546 "min_latency_us": 2849.391304347826, 00:35:09.546 "max_latency_us": 17894.177391304347 00:35:09.546 } 00:35:09.546 ], 00:35:09.546 "core_count": 1 00:35:09.546 } 00:35:09.546 15:44:13 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:09.546 15:44:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:09.805 15:44:13 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:09.805 15:44:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:09.805 15:44:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:09.805 15:44:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:09.805 15:44:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:09.805 15:44:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.064 15:44:13 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:10.064 15:44:13 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:10.064 15:44:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:10.064 15:44:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:10.064 15:44:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:10.064 15:44:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:10.064 15:44:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.323 15:44:14 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:10.323 15:44:14 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:10.323 15:44:14 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:10.323 15:44:14 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:10.323 15:44:14 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:10.323 15:44:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:10.323 15:44:14 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:10.323 15:44:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:10.323 15:44:14 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:10.323 15:44:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:10.323 [2024-11-20 15:44:14.204138] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:10.323 [2024-11-20 15:44:14.205047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164fd00 (107): Transport endpoint is not connected 00:35:10.323 [2024-11-20 15:44:14.206042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164fd00 (9): Bad file descriptor 00:35:10.323 [2024-11-20 15:44:14.207043] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:10.323 [2024-11-20 15:44:14.207059] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:10.323 [2024-11-20 15:44:14.207067] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:10.323 [2024-11-20 15:44:14.207075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:10.323 request: 00:35:10.323 { 00:35:10.323 "name": "nvme0", 00:35:10.323 "trtype": "tcp", 00:35:10.323 "traddr": "127.0.0.1", 00:35:10.323 "adrfam": "ipv4", 00:35:10.323 "trsvcid": "4420", 00:35:10.323 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:10.323 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:10.323 "prchk_reftag": false, 00:35:10.323 "prchk_guard": false, 00:35:10.323 "hdgst": false, 00:35:10.323 "ddgst": false, 00:35:10.323 "psk": "key1", 00:35:10.323 "allow_unrecognized_csi": false, 00:35:10.323 "method": "bdev_nvme_attach_controller", 00:35:10.323 "req_id": 1 00:35:10.323 } 00:35:10.323 Got JSON-RPC error response 00:35:10.323 response: 00:35:10.323 { 00:35:10.323 "code": -5, 00:35:10.323 "message": "Input/output error" 00:35:10.323 } 00:35:10.582 15:44:14 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:10.582 15:44:14 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:10.582 15:44:14 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:10.582 15:44:14 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:10.582 15:44:14 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:10.582 15:44:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:10.582 15:44:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:10.582 15:44:14 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:10.582 15:44:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:10.582 15:44:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.582 15:44:14 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:10.582 15:44:14 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:10.582 15:44:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:10.582 15:44:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:10.582 15:44:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:10.582 15:44:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:10.582 15:44:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.841 15:44:14 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:10.841 15:44:14 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:10.841 15:44:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:11.100 15:44:14 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:11.100 15:44:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:11.359 15:44:15 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:11.359 15:44:15 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:11.359 15:44:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.359 15:44:15 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:35:11.359 15:44:15 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.wFAdr0d7gs 00:35:11.359 15:44:15 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.wFAdr0d7gs 00:35:11.359 15:44:15 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:11.359 15:44:15 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.wFAdr0d7gs 00:35:11.359 15:44:15 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:11.359 15:44:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:11.359 15:44:15 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:11.359 15:44:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:11.359 15:44:15 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wFAdr0d7gs 00:35:11.359 15:44:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wFAdr0d7gs 00:35:11.618 [2024-11-20 15:44:15.389220] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.wFAdr0d7gs': 0100660 00:35:11.618 [2024-11-20 15:44:15.389246] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:11.618 request: 00:35:11.618 { 00:35:11.618 "name": "key0", 00:35:11.618 "path": "/tmp/tmp.wFAdr0d7gs", 00:35:11.618 "method": "keyring_file_add_key", 00:35:11.618 "req_id": 1 00:35:11.618 } 00:35:11.618 Got JSON-RPC error response 00:35:11.619 response: 00:35:11.619 { 00:35:11.619 "code": -1, 00:35:11.619 "message": "Operation not permitted" 00:35:11.619 } 00:35:11.619 15:44:15 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:11.619 15:44:15 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:11.619 15:44:15 
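After `chmod 0660`, the keyring refuses to re-add the key file ("Invalid permissions for key file ... 0100660"). A sketch of that check, assuming (as the error suggests) that `keyring_file_check_path` requires all group/other permission bits to be clear, i.e. a 0600-style mode:

```python
import os
import stat

def check_key_path(path: str) -> None:
    # Reject key files readable or writable by group/other;
    # only owner bits (0600) are acceptable.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        raise PermissionError(
            f"Invalid permissions for key file '{path}': 0{mode:03o}")
```

The test restores 0600 and re-adds the key successfully, then deletes the file to provoke the "No such file or directory" attach failure that follows.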
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:11.619 15:44:15 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:11.619 15:44:15 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.wFAdr0d7gs 00:35:11.619 15:44:15 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wFAdr0d7gs 00:35:11.619 15:44:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wFAdr0d7gs 00:35:11.878 15:44:15 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.wFAdr0d7gs 00:35:11.878 15:44:15 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:11.878 15:44:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:11.878 15:44:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:11.878 15:44:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:11.878 15:44:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:11.878 15:44:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.138 15:44:15 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:12.138 15:44:15 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:12.138 15:44:15 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:12.138 15:44:15 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:12.138 15:44:15 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:12.138 15:44:15 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:12.138 15:44:15 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:12.138 15:44:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:12.138 15:44:15 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:12.138 15:44:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:12.138 [2024-11-20 15:44:15.970775] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.wFAdr0d7gs': No such file or directory 00:35:12.138 [2024-11-20 15:44:15.970795] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:12.138 [2024-11-20 15:44:15.970810] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:12.138 [2024-11-20 15:44:15.970817] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:12.138 [2024-11-20 15:44:15.970825] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:12.138 [2024-11-20 15:44:15.970835] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:12.138 request: 00:35:12.138 { 00:35:12.138 "name": "nvme0", 00:35:12.138 "trtype": "tcp", 00:35:12.138 "traddr": "127.0.0.1", 00:35:12.138 "adrfam": "ipv4", 00:35:12.138 "trsvcid": "4420", 00:35:12.138 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:12.138 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:12.138 "prchk_reftag": false, 00:35:12.138 "prchk_guard": false, 00:35:12.138 "hdgst": false, 00:35:12.138 "ddgst": false, 00:35:12.138 "psk": "key0", 00:35:12.138 "allow_unrecognized_csi": false, 00:35:12.138 "method": "bdev_nvme_attach_controller", 00:35:12.138 "req_id": 1 00:35:12.138 } 00:35:12.138 Got JSON-RPC error response 00:35:12.138 response: 00:35:12.138 { 00:35:12.138 "code": -19, 00:35:12.138 "message": "No such device" 00:35:12.138 } 00:35:12.138 15:44:16 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:12.138 15:44:16 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:12.138 15:44:16 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:12.138 15:44:16 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:12.138 15:44:16 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:12.138 15:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:12.397 15:44:16 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:12.397 15:44:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:12.397 15:44:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:12.397 15:44:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:12.397 15:44:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:12.397 15:44:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:12.397 15:44:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.auZW8Rmmbo 00:35:12.397 15:44:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:12.397 15:44:16 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:12.397 15:44:16 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:35:12.397 15:44:16 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:12.397 15:44:16 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:12.397 15:44:16 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:12.397 15:44:16 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:12.397 15:44:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.auZW8Rmmbo 00:35:12.397 15:44:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.auZW8Rmmbo 00:35:12.397 15:44:16 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.auZW8Rmmbo 00:35:12.397 15:44:16 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.auZW8Rmmbo 00:35:12.397 15:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.auZW8Rmmbo 00:35:12.659 15:44:16 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:12.659 15:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:12.997 nvme0n1 00:35:12.997 15:44:16 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:12.997 15:44:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:12.997 15:44:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:12.997 15:44:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:12.997 15:44:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:12.997 15:44:16 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:13.300 15:44:16 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:13.300 15:44:16 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:13.300 15:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:13.300 15:44:17 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:13.300 15:44:17 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:13.300 15:44:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:13.300 15:44:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:13.300 15:44:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:13.616 15:44:17 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:13.616 15:44:17 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:13.616 15:44:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:13.616 15:44:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:13.616 15:44:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:13.616 15:44:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:13.616 15:44:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:13.616 15:44:17 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:13.616 15:44:17 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:13.616 15:44:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:35:13.873 15:44:17 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:13.873 15:44:17 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:13.873 15:44:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:14.131 15:44:17 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:14.131 15:44:17 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.auZW8Rmmbo 00:35:14.131 15:44:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.auZW8Rmmbo 00:35:14.389 15:44:18 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.FoUJ3fGdaS 00:35:14.389 15:44:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.FoUJ3fGdaS 00:35:14.648 15:44:18 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:14.648 15:44:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:14.648 nvme0n1 00:35:14.906 15:44:18 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:14.906 15:44:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:15.165 15:44:18 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:15.165 "subsystems": [ 00:35:15.165 { 00:35:15.165 "subsystem": 
"keyring", 00:35:15.165 "config": [ 00:35:15.165 { 00:35:15.165 "method": "keyring_file_add_key", 00:35:15.165 "params": { 00:35:15.165 "name": "key0", 00:35:15.165 "path": "/tmp/tmp.auZW8Rmmbo" 00:35:15.165 } 00:35:15.165 }, 00:35:15.165 { 00:35:15.165 "method": "keyring_file_add_key", 00:35:15.165 "params": { 00:35:15.165 "name": "key1", 00:35:15.165 "path": "/tmp/tmp.FoUJ3fGdaS" 00:35:15.165 } 00:35:15.165 } 00:35:15.165 ] 00:35:15.165 }, 00:35:15.165 { 00:35:15.165 "subsystem": "iobuf", 00:35:15.165 "config": [ 00:35:15.165 { 00:35:15.165 "method": "iobuf_set_options", 00:35:15.165 "params": { 00:35:15.165 "small_pool_count": 8192, 00:35:15.165 "large_pool_count": 1024, 00:35:15.165 "small_bufsize": 8192, 00:35:15.165 "large_bufsize": 135168, 00:35:15.165 "enable_numa": false 00:35:15.165 } 00:35:15.165 } 00:35:15.165 ] 00:35:15.165 }, 00:35:15.165 { 00:35:15.165 "subsystem": "sock", 00:35:15.165 "config": [ 00:35:15.165 { 00:35:15.165 "method": "sock_set_default_impl", 00:35:15.165 "params": { 00:35:15.165 "impl_name": "posix" 00:35:15.165 } 00:35:15.165 }, 00:35:15.165 { 00:35:15.165 "method": "sock_impl_set_options", 00:35:15.165 "params": { 00:35:15.165 "impl_name": "ssl", 00:35:15.165 "recv_buf_size": 4096, 00:35:15.165 "send_buf_size": 4096, 00:35:15.165 "enable_recv_pipe": true, 00:35:15.165 "enable_quickack": false, 00:35:15.165 "enable_placement_id": 0, 00:35:15.165 "enable_zerocopy_send_server": true, 00:35:15.165 "enable_zerocopy_send_client": false, 00:35:15.165 "zerocopy_threshold": 0, 00:35:15.165 "tls_version": 0, 00:35:15.165 "enable_ktls": false 00:35:15.165 } 00:35:15.165 }, 00:35:15.165 { 00:35:15.165 "method": "sock_impl_set_options", 00:35:15.165 "params": { 00:35:15.165 "impl_name": "posix", 00:35:15.165 "recv_buf_size": 2097152, 00:35:15.165 "send_buf_size": 2097152, 00:35:15.165 "enable_recv_pipe": true, 00:35:15.165 "enable_quickack": false, 00:35:15.165 "enable_placement_id": 0, 00:35:15.165 "enable_zerocopy_send_server": true, 
00:35:15.165 "enable_zerocopy_send_client": false, 00:35:15.165 "zerocopy_threshold": 0, 00:35:15.165 "tls_version": 0, 00:35:15.165 "enable_ktls": false 00:35:15.165 } 00:35:15.165 } 00:35:15.165 ] 00:35:15.165 }, 00:35:15.165 { 00:35:15.165 "subsystem": "vmd", 00:35:15.165 "config": [] 00:35:15.165 }, 00:35:15.165 { 00:35:15.165 "subsystem": "accel", 00:35:15.165 "config": [ 00:35:15.165 { 00:35:15.165 "method": "accel_set_options", 00:35:15.165 "params": { 00:35:15.165 "small_cache_size": 128, 00:35:15.165 "large_cache_size": 16, 00:35:15.165 "task_count": 2048, 00:35:15.165 "sequence_count": 2048, 00:35:15.165 "buf_count": 2048 00:35:15.165 } 00:35:15.165 } 00:35:15.165 ] 00:35:15.165 }, 00:35:15.165 { 00:35:15.166 "subsystem": "bdev", 00:35:15.166 "config": [ 00:35:15.166 { 00:35:15.166 "method": "bdev_set_options", 00:35:15.166 "params": { 00:35:15.166 "bdev_io_pool_size": 65535, 00:35:15.166 "bdev_io_cache_size": 256, 00:35:15.166 "bdev_auto_examine": true, 00:35:15.166 "iobuf_small_cache_size": 128, 00:35:15.166 "iobuf_large_cache_size": 16 00:35:15.166 } 00:35:15.166 }, 00:35:15.166 { 00:35:15.166 "method": "bdev_raid_set_options", 00:35:15.166 "params": { 00:35:15.166 "process_window_size_kb": 1024, 00:35:15.166 "process_max_bandwidth_mb_sec": 0 00:35:15.166 } 00:35:15.166 }, 00:35:15.166 { 00:35:15.166 "method": "bdev_iscsi_set_options", 00:35:15.166 "params": { 00:35:15.166 "timeout_sec": 30 00:35:15.166 } 00:35:15.166 }, 00:35:15.166 { 00:35:15.166 "method": "bdev_nvme_set_options", 00:35:15.166 "params": { 00:35:15.166 "action_on_timeout": "none", 00:35:15.166 "timeout_us": 0, 00:35:15.166 "timeout_admin_us": 0, 00:35:15.166 "keep_alive_timeout_ms": 10000, 00:35:15.166 "arbitration_burst": 0, 00:35:15.166 "low_priority_weight": 0, 00:35:15.166 "medium_priority_weight": 0, 00:35:15.166 "high_priority_weight": 0, 00:35:15.166 "nvme_adminq_poll_period_us": 10000, 00:35:15.166 "nvme_ioq_poll_period_us": 0, 00:35:15.166 "io_queue_requests": 512, 
00:35:15.166 "delay_cmd_submit": true, 00:35:15.166 "transport_retry_count": 4, 00:35:15.166 "bdev_retry_count": 3, 00:35:15.166 "transport_ack_timeout": 0, 00:35:15.166 "ctrlr_loss_timeout_sec": 0, 00:35:15.166 "reconnect_delay_sec": 0, 00:35:15.166 "fast_io_fail_timeout_sec": 0, 00:35:15.166 "disable_auto_failback": false, 00:35:15.166 "generate_uuids": false, 00:35:15.166 "transport_tos": 0, 00:35:15.166 "nvme_error_stat": false, 00:35:15.166 "rdma_srq_size": 0, 00:35:15.166 "io_path_stat": false, 00:35:15.166 "allow_accel_sequence": false, 00:35:15.166 "rdma_max_cq_size": 0, 00:35:15.166 "rdma_cm_event_timeout_ms": 0, 00:35:15.166 "dhchap_digests": [ 00:35:15.166 "sha256", 00:35:15.166 "sha384", 00:35:15.166 "sha512" 00:35:15.166 ], 00:35:15.166 "dhchap_dhgroups": [ 00:35:15.166 "null", 00:35:15.166 "ffdhe2048", 00:35:15.166 "ffdhe3072", 00:35:15.166 "ffdhe4096", 00:35:15.166 "ffdhe6144", 00:35:15.166 "ffdhe8192" 00:35:15.166 ] 00:35:15.166 } 00:35:15.166 }, 00:35:15.166 { 00:35:15.166 "method": "bdev_nvme_attach_controller", 00:35:15.166 "params": { 00:35:15.166 "name": "nvme0", 00:35:15.166 "trtype": "TCP", 00:35:15.166 "adrfam": "IPv4", 00:35:15.166 "traddr": "127.0.0.1", 00:35:15.166 "trsvcid": "4420", 00:35:15.166 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:15.166 "prchk_reftag": false, 00:35:15.166 "prchk_guard": false, 00:35:15.166 "ctrlr_loss_timeout_sec": 0, 00:35:15.166 "reconnect_delay_sec": 0, 00:35:15.166 "fast_io_fail_timeout_sec": 0, 00:35:15.166 "psk": "key0", 00:35:15.166 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:15.166 "hdgst": false, 00:35:15.166 "ddgst": false, 00:35:15.166 "multipath": "multipath" 00:35:15.166 } 00:35:15.166 }, 00:35:15.166 { 00:35:15.166 "method": "bdev_nvme_set_hotplug", 00:35:15.166 "params": { 00:35:15.166 "period_us": 100000, 00:35:15.166 "enable": false 00:35:15.166 } 00:35:15.166 }, 00:35:15.166 { 00:35:15.166 "method": "bdev_wait_for_examine" 00:35:15.166 } 00:35:15.166 ] 00:35:15.166 }, 00:35:15.166 { 
00:35:15.166 "subsystem": "nbd", 00:35:15.166 "config": [] 00:35:15.166 } 00:35:15.166 ] 00:35:15.166 }' 00:35:15.166 15:44:18 keyring_file -- keyring/file.sh@115 -- # killprocess 2442631 00:35:15.166 15:44:18 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2442631 ']' 00:35:15.166 15:44:18 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2442631 00:35:15.166 15:44:18 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:15.166 15:44:18 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:15.166 15:44:18 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2442631 00:35:15.166 15:44:18 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:15.166 15:44:18 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:15.166 15:44:18 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2442631' 00:35:15.166 killing process with pid 2442631 00:35:15.166 15:44:18 keyring_file -- common/autotest_common.sh@973 -- # kill 2442631 00:35:15.166 Received shutdown signal, test time was about 1.000000 seconds 00:35:15.166 00:35:15.166 Latency(us) 00:35:15.166 [2024-11-20T14:44:19.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:15.166 [2024-11-20T14:44:19.074Z] =================================================================================================================== 00:35:15.166 [2024-11-20T14:44:19.074Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:15.166 15:44:18 keyring_file -- common/autotest_common.sh@978 -- # wait 2442631 00:35:15.166 15:44:19 keyring_file -- keyring/file.sh@118 -- # bperfpid=2444151 00:35:15.166 15:44:19 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2444151 /var/tmp/bperf.sock 00:35:15.166 15:44:19 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2444151 ']' 00:35:15.166 15:44:19 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:35:15.166 15:44:19 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:15.166 15:44:19 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:15.166 15:44:19 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:15.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:15.166 15:44:19 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:15.166 "subsystems": [ 00:35:15.166 { 00:35:15.166 "subsystem": "keyring", 00:35:15.166 "config": [ 00:35:15.166 { 00:35:15.166 "method": "keyring_file_add_key", 00:35:15.166 "params": { 00:35:15.166 "name": "key0", 00:35:15.166 "path": "/tmp/tmp.auZW8Rmmbo" 00:35:15.166 } 00:35:15.166 }, 00:35:15.166 { 00:35:15.166 "method": "keyring_file_add_key", 00:35:15.166 "params": { 00:35:15.166 "name": "key1", 00:35:15.166 "path": "/tmp/tmp.FoUJ3fGdaS" 00:35:15.166 } 00:35:15.166 } 00:35:15.166 ] 00:35:15.166 }, 00:35:15.166 { 00:35:15.166 "subsystem": "iobuf", 00:35:15.166 "config": [ 00:35:15.166 { 00:35:15.166 "method": "iobuf_set_options", 00:35:15.166 "params": { 00:35:15.166 "small_pool_count": 8192, 00:35:15.166 "large_pool_count": 1024, 00:35:15.166 "small_bufsize": 8192, 00:35:15.166 "large_bufsize": 135168, 00:35:15.166 "enable_numa": false 00:35:15.166 } 00:35:15.166 } 00:35:15.166 ] 00:35:15.166 }, 00:35:15.166 { 00:35:15.166 "subsystem": "sock", 00:35:15.166 "config": [ 00:35:15.166 { 00:35:15.166 "method": "sock_set_default_impl", 00:35:15.166 "params": { 00:35:15.166 "impl_name": "posix" 00:35:15.166 } 00:35:15.166 }, 00:35:15.166 { 00:35:15.166 "method": "sock_impl_set_options", 00:35:15.166 "params": { 00:35:15.166 "impl_name": "ssl", 00:35:15.166 "recv_buf_size": 4096, 00:35:15.166 
"send_buf_size": 4096, 00:35:15.166 "enable_recv_pipe": true, 00:35:15.166 "enable_quickack": false, 00:35:15.166 "enable_placement_id": 0, 00:35:15.166 "enable_zerocopy_send_server": true, 00:35:15.166 "enable_zerocopy_send_client": false, 00:35:15.166 "zerocopy_threshold": 0, 00:35:15.166 "tls_version": 0, 00:35:15.166 "enable_ktls": false 00:35:15.166 } 00:35:15.166 }, 00:35:15.166 { 00:35:15.166 "method": "sock_impl_set_options", 00:35:15.166 "params": { 00:35:15.166 "impl_name": "posix", 00:35:15.166 "recv_buf_size": 2097152, 00:35:15.166 "send_buf_size": 2097152, 00:35:15.166 "enable_recv_pipe": true, 00:35:15.166 "enable_quickack": false, 00:35:15.166 "enable_placement_id": 0, 00:35:15.166 "enable_zerocopy_send_server": true, 00:35:15.166 "enable_zerocopy_send_client": false, 00:35:15.166 "zerocopy_threshold": 0, 00:35:15.166 "tls_version": 0, 00:35:15.166 "enable_ktls": false 00:35:15.167 } 00:35:15.167 } 00:35:15.167 ] 00:35:15.167 }, 00:35:15.167 { 00:35:15.167 "subsystem": "vmd", 00:35:15.167 "config": [] 00:35:15.167 }, 00:35:15.167 { 00:35:15.167 "subsystem": "accel", 00:35:15.167 "config": [ 00:35:15.167 { 00:35:15.167 "method": "accel_set_options", 00:35:15.167 "params": { 00:35:15.167 "small_cache_size": 128, 00:35:15.167 "large_cache_size": 16, 00:35:15.167 "task_count": 2048, 00:35:15.167 "sequence_count": 2048, 00:35:15.167 "buf_count": 2048 00:35:15.167 } 00:35:15.167 } 00:35:15.167 ] 00:35:15.167 }, 00:35:15.167 { 00:35:15.167 "subsystem": "bdev", 00:35:15.167 "config": [ 00:35:15.167 { 00:35:15.167 "method": "bdev_set_options", 00:35:15.167 "params": { 00:35:15.167 "bdev_io_pool_size": 65535, 00:35:15.167 "bdev_io_cache_size": 256, 00:35:15.167 "bdev_auto_examine": true, 00:35:15.167 "iobuf_small_cache_size": 128, 00:35:15.167 "iobuf_large_cache_size": 16 00:35:15.167 } 00:35:15.167 }, 00:35:15.167 { 00:35:15.167 "method": "bdev_raid_set_options", 00:35:15.167 "params": { 00:35:15.167 "process_window_size_kb": 1024, 00:35:15.167 
"process_max_bandwidth_mb_sec": 0 00:35:15.167 } 00:35:15.167 }, 00:35:15.167 { 00:35:15.167 "method": "bdev_iscsi_set_options", 00:35:15.167 "params": { 00:35:15.167 "timeout_sec": 30 00:35:15.167 } 00:35:15.167 }, 00:35:15.167 { 00:35:15.167 "method": "bdev_nvme_set_options", 00:35:15.167 "params": { 00:35:15.167 "action_on_timeout": "none", 00:35:15.167 "timeout_us": 0, 00:35:15.167 "timeout_admin_us": 0, 00:35:15.167 "keep_alive_timeout_ms": 10000, 00:35:15.167 "arbitration_burst": 0, 00:35:15.167 "low_priority_weight": 0, 00:35:15.167 "medium_priority_weight": 0, 00:35:15.167 "high_priority_weight": 0, 00:35:15.167 "nvme_adminq_poll_period_us": 10000, 00:35:15.167 "nvme_ioq_poll_period_us": 0, 00:35:15.167 "io_queue_requests": 512, 00:35:15.167 "delay_cmd_submit": true, 00:35:15.167 "transport_retry_count": 4, 00:35:15.167 "bdev_retry_count": 3, 00:35:15.167 "transport_ack_timeout": 0, 00:35:15.167 "ctrlr_loss_timeout_sec": 0, 00:35:15.167 "reconnect_delay_sec": 0, 00:35:15.167 "fast_io_fail_timeout_sec": 0, 00:35:15.167 "disable_auto_failback": false, 00:35:15.167 "generate_uuids": false, 00:35:15.167 "transport_tos": 0, 00:35:15.167 "nvme_error_stat": false, 00:35:15.167 "rdma_srq_size": 0, 00:35:15.167 "io_path_stat": false, 00:35:15.167 "allow_accel_sequence": false, 00:35:15.167 "rdma_max_cq_size": 0, 00:35:15.167 "rdma_cm_event_timeout_ms": 0, 00:35:15.167 "dhchap_digests": [ 00:35:15.167 "sha256", 00:35:15.167 "sha384", 00:35:15.167 "sha512" 00:35:15.167 ], 00:35:15.167 "dhchap_dhgroups": [ 00:35:15.167 "null", 00:35:15.167 "ffdhe2048", 00:35:15.167 "ffdhe3072", 00:35:15.167 "ffdhe4096", 00:35:15.167 "ffdhe6144", 00:35:15.167 "ffdhe8192" 00:35:15.167 ] 00:35:15.167 } 00:35:15.167 }, 00:35:15.167 { 00:35:15.167 "method": "bdev_nvme_attach_controller", 00:35:15.167 "params": { 00:35:15.167 "name": "nvme0", 00:35:15.167 "trtype": "TCP", 00:35:15.167 "adrfam": "IPv4", 00:35:15.167 "traddr": "127.0.0.1", 00:35:15.167 "trsvcid": "4420", 00:35:15.167 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:35:15.167 "prchk_reftag": false, 00:35:15.167 "prchk_guard": false, 00:35:15.167 "ctrlr_loss_timeout_sec": 0, 00:35:15.167 "reconnect_delay_sec": 0, 00:35:15.167 "fast_io_fail_timeout_sec": 0, 00:35:15.167 "psk": "key0", 00:35:15.167 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:15.167 "hdgst": false, 00:35:15.167 "ddgst": false, 00:35:15.167 "multipath": "multipath" 00:35:15.167 } 00:35:15.167 }, 00:35:15.167 { 00:35:15.167 "method": "bdev_nvme_set_hotplug", 00:35:15.167 "params": { 00:35:15.167 "period_us": 100000, 00:35:15.167 "enable": false 00:35:15.167 } 00:35:15.167 }, 00:35:15.167 { 00:35:15.167 "method": "bdev_wait_for_examine" 00:35:15.167 } 00:35:15.167 ] 00:35:15.167 }, 00:35:15.167 { 00:35:15.167 "subsystem": "nbd", 00:35:15.167 "config": [] 00:35:15.167 } 00:35:15.167 ] 00:35:15.167 }' 00:35:15.167 15:44:19 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:15.167 15:44:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:15.425 [2024-11-20 15:44:19.077046] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:35:15.425 [2024-11-20 15:44:19.077096] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2444151 ] 00:35:15.425 [2024-11-20 15:44:19.151910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.425 [2024-11-20 15:44:19.195236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.684 [2024-11-20 15:44:19.355826] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:16.252 15:44:19 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:16.252 15:44:19 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:16.252 15:44:19 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:16.252 15:44:19 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:16.252 15:44:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.252 15:44:20 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:16.252 15:44:20 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:16.252 15:44:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:16.252 15:44:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:16.252 15:44:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.252 15:44:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:16.252 15:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.510 15:44:20 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:16.510 15:44:20 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:16.510 15:44:20 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:16.510 15:44:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:16.510 15:44:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.510 15:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.510 15:44:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:16.767 15:44:20 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:16.767 15:44:20 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:16.767 15:44:20 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:16.767 15:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:17.026 15:44:20 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:17.026 15:44:20 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:17.026 15:44:20 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.auZW8Rmmbo /tmp/tmp.FoUJ3fGdaS 00:35:17.026 15:44:20 keyring_file -- keyring/file.sh@20 -- # killprocess 2444151 00:35:17.026 15:44:20 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2444151 ']' 00:35:17.026 15:44:20 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2444151 00:35:17.026 15:44:20 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:17.026 15:44:20 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:17.026 15:44:20 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2444151 00:35:17.026 15:44:20 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:17.026 15:44:20 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:17.026 15:44:20 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2444151' 00:35:17.026 killing process with pid 2444151 00:35:17.026 15:44:20 keyring_file -- common/autotest_common.sh@973 -- # kill 2444151 00:35:17.026 Received shutdown signal, test time was about 1.000000 seconds 00:35:17.026 00:35:17.026 Latency(us) 00:35:17.026 [2024-11-20T14:44:20.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.026 [2024-11-20T14:44:20.934Z] =================================================================================================================== 00:35:17.026 [2024-11-20T14:44:20.934Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:17.026 15:44:20 keyring_file -- common/autotest_common.sh@978 -- # wait 2444151 00:35:17.285 15:44:20 keyring_file -- keyring/file.sh@21 -- # killprocess 2442625 00:35:17.285 15:44:20 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2442625 ']' 00:35:17.285 15:44:20 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2442625 00:35:17.285 15:44:20 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:17.285 15:44:20 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:17.285 15:44:20 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2442625 00:35:17.285 15:44:21 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:17.285 15:44:21 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:17.285 15:44:21 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2442625' 00:35:17.285 killing process with pid 2442625 00:35:17.285 15:44:21 keyring_file -- common/autotest_common.sh@973 -- # kill 2442625 00:35:17.285 15:44:21 keyring_file -- common/autotest_common.sh@978 -- # wait 2442625 00:35:17.543 00:35:17.543 real 0m11.846s 00:35:17.543 user 0m29.479s 00:35:17.543 sys 0m2.666s 00:35:17.543 15:44:21 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:35:17.543 15:44:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:17.543 ************************************ 00:35:17.543 END TEST keyring_file 00:35:17.543 ************************************ 00:35:17.543 15:44:21 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:17.543 15:44:21 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:17.543 15:44:21 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:17.543 15:44:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:17.543 15:44:21 -- common/autotest_common.sh@10 -- # set +x 00:35:17.543 ************************************ 00:35:17.543 START TEST keyring_linux 00:35:17.543 ************************************ 00:35:17.543 15:44:21 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:17.543 Joined session keyring: 478792481 00:35:17.804 * Looking for test storage... 
00:35:17.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:17.804 15:44:21 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:17.804 15:44:21 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:35:17.804 15:44:21 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:17.804 15:44:21 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:17.804 15:44:21 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:17.804 15:44:21 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:17.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.804 --rc genhtml_branch_coverage=1 00:35:17.804 --rc genhtml_function_coverage=1 00:35:17.804 --rc genhtml_legend=1 00:35:17.804 --rc geninfo_all_blocks=1 00:35:17.804 --rc geninfo_unexecuted_blocks=1 00:35:17.804 00:35:17.804 ' 00:35:17.804 15:44:21 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:17.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.804 --rc genhtml_branch_coverage=1 00:35:17.804 --rc genhtml_function_coverage=1 00:35:17.804 --rc genhtml_legend=1 00:35:17.804 --rc geninfo_all_blocks=1 00:35:17.804 --rc geninfo_unexecuted_blocks=1 00:35:17.804 00:35:17.804 ' 
00:35:17.804 15:44:21 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:17.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.804 --rc genhtml_branch_coverage=1 00:35:17.804 --rc genhtml_function_coverage=1 00:35:17.804 --rc genhtml_legend=1 00:35:17.804 --rc geninfo_all_blocks=1 00:35:17.804 --rc geninfo_unexecuted_blocks=1 00:35:17.804 00:35:17.804 ' 00:35:17.804 15:44:21 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:17.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.804 --rc genhtml_branch_coverage=1 00:35:17.804 --rc genhtml_function_coverage=1 00:35:17.804 --rc genhtml_legend=1 00:35:17.804 --rc geninfo_all_blocks=1 00:35:17.804 --rc geninfo_unexecuted_blocks=1 00:35:17.804 00:35:17.804 ' 00:35:17.804 15:44:21 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:17.804 15:44:21 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:17.804 15:44:21 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:17.804 15:44:21 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:17.804 15:44:21 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:17.804 15:44:21 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:17.804 15:44:21 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:17.804 15:44:21 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:17.804 15:44:21 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:17.804 15:44:21 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:17.804 15:44:21 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:17.804 15:44:21 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:17.804 15:44:21 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:17.804 15:44:21 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:17.804 15:44:21 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:17.804 15:44:21 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:17.804 15:44:21 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:17.804 15:44:21 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:17.804 15:44:21 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:17.804 15:44:21 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:17.804 15:44:21 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:17.804 15:44:21 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.804 15:44:21 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.804 15:44:21 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.804 15:44:21 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:17.805 15:44:21 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:17.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:17.805 15:44:21 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:17.805 15:44:21 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:17.805 15:44:21 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:17.805 15:44:21 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:17.805 15:44:21 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:17.805 15:44:21 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:17.805 15:44:21 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:17.805 15:44:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:17.805 15:44:21 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:17.805 15:44:21 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:17.805 15:44:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:17.805 15:44:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:17.805 15:44:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:17.805 15:44:21 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:17.805 15:44:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:17.805 /tmp/:spdk-test:key0 00:35:17.805 15:44:21 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:17.805 15:44:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:17.805 15:44:21 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:17.805 15:44:21 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:17.805 15:44:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:17.805 15:44:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:17.805 15:44:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:17.805 15:44:21 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:17.805 15:44:21 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:17.805 15:44:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:17.805 /tmp/:spdk-test:key1 00:35:17.805 15:44:21 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2444706 00:35:17.805 15:44:21 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 2444706 00:35:17.805 15:44:21 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:17.805 15:44:21 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2444706 ']' 00:35:17.805 15:44:21 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:17.805 15:44:21 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:17.805 15:44:21 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:17.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:17.805 15:44:21 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:17.805 15:44:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:18.064 [2024-11-20 15:44:21.747042] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:35:18.064 [2024-11-20 15:44:21.747091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2444706 ] 00:35:18.064 [2024-11-20 15:44:21.821104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.064 [2024-11-20 15:44:21.863326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:18.323 15:44:22 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:18.323 15:44:22 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:18.323 15:44:22 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:18.323 15:44:22 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.323 15:44:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:18.323 [2024-11-20 15:44:22.076530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:18.323 null0 00:35:18.323 [2024-11-20 15:44:22.108585] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:18.323 [2024-11-20 15:44:22.108973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:18.323 15:44:22 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.323 15:44:22 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:18.323 381264687 00:35:18.323 15:44:22 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:18.323 226559160 00:35:18.323 15:44:22 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2444717 00:35:18.323 15:44:22 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2444717 /var/tmp/bperf.sock 00:35:18.323 15:44:22 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:18.323 15:44:22 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2444717 ']' 00:35:18.323 15:44:22 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:18.323 15:44:22 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:18.323 15:44:22 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:18.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:18.323 15:44:22 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:18.323 15:44:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:18.323 [2024-11-20 15:44:22.181554] Starting SPDK v25.01-pre git sha1 ede20dc4e / DPDK 24.03.0 initialization... 
00:35:18.323 [2024-11-20 15:44:22.181599] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2444717 ] 00:35:18.582 [2024-11-20 15:44:22.256598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.582 [2024-11-20 15:44:22.299501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:18.582 15:44:22 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:18.582 15:44:22 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:18.582 15:44:22 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:18.582 15:44:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:18.842 15:44:22 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:18.842 15:44:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:19.101 15:44:22 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:19.101 15:44:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:19.101 [2024-11-20 15:44:22.935404] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:19.101 nvme0n1 00:35:19.360 15:44:23 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:19.360 15:44:23 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:19.360 15:44:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:19.360 15:44:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:19.360 15:44:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:19.360 15:44:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:19.360 15:44:23 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:19.360 15:44:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:19.360 15:44:23 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:19.360 15:44:23 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:19.360 15:44:23 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:19.360 15:44:23 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:19.360 15:44:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:19.618 15:44:23 keyring_linux -- keyring/linux.sh@25 -- # sn=381264687 00:35:19.618 15:44:23 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:19.619 15:44:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:19.619 15:44:23 keyring_linux -- keyring/linux.sh@26 -- # [[ 381264687 == \3\8\1\2\6\4\6\8\7 ]] 00:35:19.619 15:44:23 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 381264687 00:35:19.619 15:44:23 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:19.619 15:44:23 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:19.619 Running I/O for 1 seconds... 00:35:20.993 21044.00 IOPS, 82.20 MiB/s 00:35:20.993 Latency(us) 00:35:20.993 [2024-11-20T14:44:24.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:20.993 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:20.993 nvme0n1 : 1.01 21043.98 82.20 0.00 0.00 6062.14 3262.55 8491.19 00:35:20.993 [2024-11-20T14:44:24.901Z] =================================================================================================================== 00:35:20.993 [2024-11-20T14:44:24.901Z] Total : 21043.98 82.20 0.00 0.00 6062.14 3262.55 8491.19 00:35:20.993 { 00:35:20.993 "results": [ 00:35:20.993 { 00:35:20.993 "job": "nvme0n1", 00:35:20.993 "core_mask": "0x2", 00:35:20.993 "workload": "randread", 00:35:20.993 "status": "finished", 00:35:20.993 "queue_depth": 128, 00:35:20.993 "io_size": 4096, 00:35:20.993 "runtime": 1.006131, 00:35:20.993 "iops": 21043.97936252834, 00:35:20.993 "mibps": 82.20304438487632, 00:35:20.993 "io_failed": 0, 00:35:20.993 "io_timeout": 0, 00:35:20.993 "avg_latency_us": 6062.141568815083, 00:35:20.993 "min_latency_us": 3262.553043478261, 00:35:20.993 "max_latency_us": 8491.186086956523 00:35:20.993 } 00:35:20.993 ], 00:35:20.993 "core_count": 1 00:35:20.993 } 00:35:20.993 15:44:24 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:20.993 15:44:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:20.993 15:44:24 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:20.993 15:44:24 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:20.993 15:44:24 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:20.993 15:44:24 
keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:20.993 15:44:24 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:20.993 15:44:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:21.252 15:44:24 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:21.252 15:44:24 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:21.252 15:44:24 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:21.252 15:44:24 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:21.252 15:44:24 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:21.252 15:44:24 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:21.252 15:44:24 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:21.252 15:44:24 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:21.252 15:44:24 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:21.252 15:44:24 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:21.252 15:44:24 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:21.252 15:44:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:21.252 [2024-11-20 15:44:25.148081] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:21.252 [2024-11-20 15:44:25.148789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3fa70 (107): Transport endpoint is not connected 00:35:21.252 [2024-11-20 15:44:25.149784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3fa70 (9): Bad file descriptor 00:35:21.252 [2024-11-20 15:44:25.150785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:21.252 [2024-11-20 15:44:25.150795] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:21.252 [2024-11-20 15:44:25.150803] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:21.252 [2024-11-20 15:44:25.150812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:35:21.252 request: 00:35:21.252 { 00:35:21.252 "name": "nvme0", 00:35:21.252 "trtype": "tcp", 00:35:21.252 "traddr": "127.0.0.1", 00:35:21.252 "adrfam": "ipv4", 00:35:21.252 "trsvcid": "4420", 00:35:21.252 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:21.252 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:21.252 "prchk_reftag": false, 00:35:21.252 "prchk_guard": false, 00:35:21.252 "hdgst": false, 00:35:21.252 "ddgst": false, 00:35:21.252 "psk": ":spdk-test:key1", 00:35:21.252 "allow_unrecognized_csi": false, 00:35:21.252 "method": "bdev_nvme_attach_controller", 00:35:21.252 "req_id": 1 00:35:21.252 } 00:35:21.252 Got JSON-RPC error response 00:35:21.252 response: 00:35:21.252 { 00:35:21.252 "code": -5, 00:35:21.252 "message": "Input/output error" 00:35:21.252 } 00:35:21.511 15:44:25 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:21.511 15:44:25 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:21.511 15:44:25 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:21.511 15:44:25 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:21.511 15:44:25 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:21.511 15:44:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:21.511 15:44:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:21.511 15:44:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:21.511 15:44:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:21.511 15:44:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:21.511 15:44:25 keyring_linux -- keyring/linux.sh@33 -- # sn=381264687 00:35:21.511 15:44:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 381264687 00:35:21.511 1 links removed 00:35:21.511 15:44:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:21.511 15:44:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:21.511 
15:44:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:21.511 15:44:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:21.511 15:44:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:21.511 15:44:25 keyring_linux -- keyring/linux.sh@33 -- # sn=226559160 00:35:21.511 15:44:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 226559160 00:35:21.511 1 links removed 00:35:21.511 15:44:25 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2444717 00:35:21.511 15:44:25 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2444717 ']' 00:35:21.511 15:44:25 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2444717 00:35:21.511 15:44:25 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:21.511 15:44:25 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:21.511 15:44:25 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2444717 00:35:21.511 15:44:25 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:21.511 15:44:25 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:21.511 15:44:25 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2444717' 00:35:21.511 killing process with pid 2444717 00:35:21.511 15:44:25 keyring_linux -- common/autotest_common.sh@973 -- # kill 2444717 00:35:21.511 Received shutdown signal, test time was about 1.000000 seconds 00:35:21.511 00:35:21.511 Latency(us) 00:35:21.511 [2024-11-20T14:44:25.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:21.511 [2024-11-20T14:44:25.419Z] =================================================================================================================== 00:35:21.511 [2024-11-20T14:44:25.419Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:21.511 15:44:25 keyring_linux -- common/autotest_common.sh@978 -- # wait 2444717 
00:35:21.511 15:44:25 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2444706 00:35:21.511 15:44:25 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2444706 ']' 00:35:21.511 15:44:25 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2444706 00:35:21.511 15:44:25 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:21.511 15:44:25 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:21.511 15:44:25 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2444706 00:35:21.770 15:44:25 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:21.770 15:44:25 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:21.770 15:44:25 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2444706' 00:35:21.770 killing process with pid 2444706 00:35:21.770 15:44:25 keyring_linux -- common/autotest_common.sh@973 -- # kill 2444706 00:35:21.770 15:44:25 keyring_linux -- common/autotest_common.sh@978 -- # wait 2444706 00:35:22.027 00:35:22.027 real 0m4.354s 00:35:22.027 user 0m8.238s 00:35:22.027 sys 0m1.422s 00:35:22.027 15:44:25 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:22.028 15:44:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:22.028 ************************************ 00:35:22.028 END TEST keyring_linux 00:35:22.028 ************************************ 00:35:22.028 15:44:25 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:22.028 15:44:25 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:22.028 15:44:25 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:22.028 15:44:25 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:22.028 15:44:25 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:22.028 15:44:25 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:22.028 15:44:25 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:22.028 15:44:25 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:35:22.028 15:44:25 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:22.028 15:44:25 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:22.028 15:44:25 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:22.028 15:44:25 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:22.028 15:44:25 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:22.028 15:44:25 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:22.028 15:44:25 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:22.028 15:44:25 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:22.028 15:44:25 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:22.028 15:44:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:22.028 15:44:25 -- common/autotest_common.sh@10 -- # set +x 00:35:22.028 15:44:25 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:22.028 15:44:25 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:22.028 15:44:25 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:22.028 15:44:25 -- common/autotest_common.sh@10 -- # set +x 00:35:27.303 INFO: APP EXITING 00:35:27.303 INFO: killing all VMs 00:35:27.303 INFO: killing vhost app 00:35:27.303 INFO: EXIT DONE 00:35:29.838 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:35:29.838 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:29.838 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:29.838 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:29.838 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:29.838 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:29.838 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:29.838 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:29.838 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:29.838 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:29.838 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:29.838 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:29.838 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:29.838 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:29.838 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:30.097 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:30.097 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:33.386 Cleaning 00:35:33.386 Removing: /var/run/dpdk/spdk0/config 00:35:33.386 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:33.386 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:33.386 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:33.386 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:33.386 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:33.386 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:33.386 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:33.386 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:33.386 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:33.386 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:33.386 Removing: /var/run/dpdk/spdk1/config 00:35:33.386 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:33.386 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:33.386 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:33.386 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:33.386 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:33.386 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:33.386 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:33.386 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:33.386 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:33.386 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:33.386 Removing: /var/run/dpdk/spdk2/config 00:35:33.386 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:33.386 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:33.386 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:33.386 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:33.386 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:33.386 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:33.386 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:33.386 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:33.386 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:33.386 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:33.386 Removing: /var/run/dpdk/spdk3/config 00:35:33.386 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:33.386 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:33.386 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:33.386 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:33.386 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:33.386 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:33.386 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:33.386 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:33.386 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:33.386 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:33.386 Removing: /var/run/dpdk/spdk4/config 00:35:33.386 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:33.386 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:33.386 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:33.386 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:33.386 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:33.386 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:33.386 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:33.386 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:33.386 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:33.386 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:35:33.386 Removing: /dev/shm/bdev_svc_trace.1 00:35:33.386 Removing: /dev/shm/nvmf_trace.0 00:35:33.386 Removing: /dev/shm/spdk_tgt_trace.pid1965805 00:35:33.386 Removing: /var/run/dpdk/spdk0 00:35:33.386 Removing: /var/run/dpdk/spdk1 00:35:33.386 Removing: /var/run/dpdk/spdk2 00:35:33.386 Removing: /var/run/dpdk/spdk3 00:35:33.386 Removing: /var/run/dpdk/spdk4 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1963654 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1964721 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1965805 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1966443 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1967390 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1967623 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1968600 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1968606 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1968958 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1970489 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1971743 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1972042 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1972329 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1972633 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1972923 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1973173 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1973427 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1973710 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1974462 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1977464 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1977722 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1977976 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1977982 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1978476 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1978483 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1978975 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1978984 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1979284 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1979472 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1979630 00:35:33.386 Removing: 
/var/run/dpdk/spdk_pid1979733 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1980241 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1980417 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1980745 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1984556 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1988869 00:35:33.386 Removing: /var/run/dpdk/spdk_pid1999656 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2000219 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2004530 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2004867 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2009135 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2015026 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2017637 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2027843 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2036902 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2038618 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2039547 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2057148 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2061219 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2106068 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2111430 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2117206 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2123746 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2123840 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2124611 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2125524 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2126452 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2126959 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2127138 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2127373 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2127382 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2127395 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2128305 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2129214 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2130128 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2130818 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2130820 00:35:33.386 Removing: /var/run/dpdk/spdk_pid2131059 
00:35:33.386 Removing: /var/run/dpdk/spdk_pid2132076
00:35:33.386 Removing: /var/run/dpdk/spdk_pid2133062
00:35:33.386 Removing: /var/run/dpdk/spdk_pid2141487
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2170723
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2175227
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2176976
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2179194
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2179431
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2179523
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2179682
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2180187
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2182022
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2182793
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2183289
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2185494
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2185883
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2186595
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2190864
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2196278
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2196279
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2196280
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2200069
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2208612
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2212429
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2218635
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2220067
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2221795
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2223317
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2227920
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2232240
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2236164
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2243682
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2243753
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2248249
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2248487
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2248716
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2249173
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2249178
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2253661
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2254229
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2258602
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2261351
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2266722
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2272220
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2281390
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2288606
00:35:33.387 Removing: /var/run/dpdk/spdk_pid2288609
00:35:33.646 Removing: /var/run/dpdk/spdk_pid2307486
00:35:33.646 Removing: /var/run/dpdk/spdk_pid2308094
00:35:33.646 Removing: /var/run/dpdk/spdk_pid2308569
00:35:33.646 Removing: /var/run/dpdk/spdk_pid2309254
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2309793
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2310466
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2310940
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2311424
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2315672
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2315906
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2322486
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2322539
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2328006
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2332248
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2342180
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2342735
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2346984
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2347225
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2351275
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2357089
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2359599
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2370151
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2378829
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2380501
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2381365
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2397695
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2401501
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2404193
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2412075
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2412087
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2417621
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2419462
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2421428
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2422583
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2424572
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2425722
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2434469
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2434927
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2435386
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2437664
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2438218
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2438773
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2442625
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2442631
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2444151
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2444706
00:35:33.647 Removing: /var/run/dpdk/spdk_pid2444717
00:35:33.647 Clean
00:35:33.906 15:44:37 -- common/autotest_common.sh@1453 -- # return 0
00:35:33.906 15:44:37 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:35:33.906 15:44:37 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:33.906 15:44:37 -- common/autotest_common.sh@10 -- # set +x
00:35:33.906 15:44:37 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:35:33.906 15:44:37 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:33.906 15:44:37 -- common/autotest_common.sh@10 -- # set +x
00:35:33.906 15:44:37 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:33.906 15:44:37 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:33.906 15:44:37 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:33.906 15:44:37 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:33.906 15:44:37 -- spdk/autotest.sh@398 -- # hostname
00:35:33.906 15:44:37 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:33.906 geninfo: WARNING: invalid characters removed from testname!
00:35:55.848 15:44:58 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:57.757 15:45:01 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:59.664 15:45:03 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:01.568 15:45:05 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:03.473 15:45:06 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:05.380 15:45:08 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:07.281 15:45:10 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:07.281 15:45:10 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:07.281 15:45:10 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:36:07.281 15:45:10 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:07.281 15:45:10 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:07.281 15:45:10 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:07.281 + [[ -n 1886405 ]]
00:36:07.281 + sudo kill 1886405
00:36:07.291 [Pipeline] }
00:36:07.306 [Pipeline] // stage
00:36:07.311 [Pipeline] }
00:36:07.326 [Pipeline] // timeout
00:36:07.331 [Pipeline] }
00:36:07.345 [Pipeline] // catchError
00:36:07.350 [Pipeline] }
00:36:07.365 [Pipeline] // wrap
00:36:07.371 [Pipeline] }
00:36:07.385 [Pipeline] // catchError
00:36:07.394 [Pipeline] stage
00:36:07.397 [Pipeline] { (Epilogue)
00:36:07.410 [Pipeline] catchError
00:36:07.412 [Pipeline] {
00:36:07.425 [Pipeline] echo
00:36:07.427 Cleanup processes
00:36:07.433 [Pipeline] sh
00:36:07.719 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:07.719 2455863 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:07.733 [Pipeline] sh
00:36:08.018 ++ grep -v 'sudo pgrep'
00:36:08.018 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:08.018 ++ awk '{print $1}'
00:36:08.018 + sudo kill -9
00:36:08.018 + true
00:36:08.030 [Pipeline] sh
00:36:08.314 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:20.530 [Pipeline] sh
00:36:20.814 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:20.814 Artifacts sizes are good
00:36:20.831 [Pipeline] archiveArtifacts
00:36:20.865 Archiving artifacts
00:36:21.000 [Pipeline] sh
00:36:21.321 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:21.337 [Pipeline] cleanWs
00:36:21.347 [WS-CLEANUP] Deleting project workspace...
00:36:21.347 [WS-CLEANUP] Deferred wipeout is used...
00:36:21.354 [WS-CLEANUP] done
00:36:21.356 [Pipeline] }
00:36:21.375 [Pipeline] // catchError
00:36:21.387 [Pipeline] sh
00:36:21.670 + logger -p user.info -t JENKINS-CI
00:36:21.679 [Pipeline] }
00:36:21.694 [Pipeline] // stage
00:36:21.700 [Pipeline] }
00:36:21.715 [Pipeline] // node
00:36:21.722 [Pipeline] End of Pipeline
00:36:21.755 Finished: SUCCESS